The Secretary Said: Make it So.

Can Change Management Theory Explain the Challenge of Achieving Enduring Public Sector Management Reform?

A Case Study of the 1980s Management Reform: Evaluation in the Australian Public Service.

Peter Graves

A Thesis submitted for the degree of Doctor of Philosophy

School of Business

February 2020


ABSTRACT

Over thirty years, the Australian Public Service (APS) has been subject to many management and performance reforms. It was assumed that APS departmental chief executives – Secretaries – could implement reform changes uniformly and effectively across the large and dispersed APS. However, the central Finance Department recently concluded that a performance focus had not become embedded in the management of the APS. This thesis investigates the practice of implementing APS management reforms between 1984 and 2013 and identifies five factors that might explain the failure to embed the specific performance reform of program evaluation.

The first factor was a failure to bridge the gap between the APS centre and its eighteen constituent Departments, whose individual Secretaries exercise considerable management autonomy. The second was the gap between a Secretary commencing top-down reform from the centre and the outcomes following implementation, which were influenced by the APS's geographic dispersion. A third factor was the often short-term timeframe of many reforms, highlighting the need to implement reform over an extended timeframe, potentially over decades, for the reform to become embedded. Fourth was the failure of change management consolidators to maintain the momentum of reform changes over the long term. Fifth, management reforms were often not designed or implemented effectively, highlighting the lack of evaluation program logic underpinning APS management reform.

The thesis highlights the challenge for management reform in the APS when the senior change agent, the Secretary, does not stay in position long enough for the reform to become embedded. This study contributes to enhancing public sector management reform effectiveness in the APS by examining the consequences of geographic dispersion, the timeframe required to embed reforms, the role of change consolidators, and the role of evaluation design in public sector reform.


DEDICATION

This thesis is dedicated to the Australian education system, to the teachers of knowledge within it, to the Australian taxpayers who help fund it and to those who have graduated from it.

In particular, this is dedicated to my sister Elaine Langshaw – a life-long teacher in the state schools of .

And to Associate Professor Simon Barraclough, Adjunct Associate Professor, Public Health, La Trobe University. Professor Barraclough inspired me during my studies culminating in my Master’s of Public Health and repeatedly challenged me to continue with further study.

Contributors all to the common wealth of .


ACKNOWLEDGEMENTS

First and foremost, I want to thank my long-suffering and patient supervisor, Professor Deborah Blackman, who has, over the past four years, steered me through the perils of academic analysis and writing. This helped me greatly in making the transition from my former membership of the Australian Public Service and its rather different style of writing. Her comments on potential sources of literature were equally valuable, and her significant commentary and helpful suggestions carried me through those challenging years.

In his supervision, Professor Michael O'Donnell was also able and willing to advise on structure, expression and content, while maintaining his commendable sense of humour. His comments were always supportive and helped me to focus on the next steps of this thesis.

My next thanks go to the late Professor Kerry Jacobs of the School of Business, UNSW Canberra, who initially accepted my candidature. Kerry’s engaging style immediately caught my attention and I enjoyed our preliminary robust discussions. This thesis is my tribute to his belief in my research capacities.

I would especially like to thank the thirty-two people who agreed to be interviewed. They represented a significant range of government, national audit, APS and research experience in public sector management reform over the past thirty-five years. Their reflective comments enabled me to understand how difficult it can be to achieve permanent management reform in the Australian Public Service.

I also want to thank both Adjunct Professor Wendy Jarvie of the School of Business, UNSW Canberra and Dr Tarek Rana, now at RMIT University. Professor Jarvie established my initial interest in this research while I was still employed in the APS, by encouraging me to contact Dr Rana as he was undertaking his PhD research at UNSW Canberra.

Finally, I want to thank my wife, Jenny, for her support and empathy over the years as I made my way through the many versions of each chapter.

Many thanks to you all.

LIST OF FIGURES

Figure Number TITLE Page

1 Timelines of APS Management Reform and their Impacts 20
2 Locations of APS Staff Nationally 24
3 A Typical APS Departmental Structure 26
4 Core APS Competencies for APS Middle Managers 33
5 30 Years of Similarities in Management Reform Timelines 35
6 A Model of the Policy Implementation Process 48
7 Program Logic 55
8 Four Criteria of Guba, as Applied in this Thesis 99
9 Backgrounds of Interviewees 107
10 Open Codes and Ranked Order 116
11 Nine Axial Codes 117
12 Research Themes Summarised 118
13 Common Factors in Initiating APS Management Reform: 1983-2013 123
14 Embedding Reform and Evaluations: Times Recorded 125
15 Change Management: Times Recorded 127
16 Change Agents: Times Recorded 134
17 Extended Time in Theory: Times Recorded 146
18 Secretary of the Department of Prime Minister & Cabinet (PM&C) and Head of the Australian Public Service – Tenures and Medians 1912-2019 157
19 Ranked Priorities: Former Secretaries; All Others 160
20 Similarities in Management Reforms 168
21 The Timelines of APS Management Reform: 1983-2015/18 169
22 Implementing Policy for the Long-Term – with Demonstrable Impact 172
23 The Jigsaw of Effective Implementation 193


LIST OF ABBREVIATIONS

AES Australian Evaluation Society
AGRAGA Advisory Group on Reform of Administration
ANAO Australian National Audit Office
ANZSOG Australia and New Zealand School of Government
APS Australian Public Service
APSC Australian Public Service Commission
CEO Chief Executive Officer
CIU Cabinet Implementation Unit
CPSA Commonwealth Public Service Act 1902
DoF Department of Finance
DSS Department of Social Security
FMIP Financial Management Improvement Program
Health Department of Health
Industry Department of Industry and Science
Infrastructure Department of Infrastructure and Regional Development
IPAA Institute of Public Administration, Australia
JCPAA Joint Committee of Public Accounts and Audit
MAB Management Advisory Board
MAC Management Advisory Committee
MfR Managing for Results
PEP Portfolio Evaluation Plans
PGPA Public Governance, Performance and Accountability Act, 2013
PM&C Department of Prime Minister and Cabinet
PSA Public Service Act 1999
PSC Public Service Commission (state government entities)
RCAGA Royal Commission on Australian Government Administration
SCFPA Standing Committee on Finance and Public Administration
SES Senior Executive Service
TFMI Task Force on Management Improvement


TABLE OF CONTENTS

THESIS/DISSERTATION SHEET i
ORIGINALITY STATEMENT ii
INCLUSION OF PUBLICATIONS STATEMENT iii
COPYRIGHT STATEMENT; AUTHENTICITY STATEMENT iv
ABSTRACT v
DEDICATION vi
ACKNOWLEDGEMENTS vii
LIST OF FIGURES viii
LIST OF ABBREVIATIONS ix

Chapter 1: Implementing Management Reform to Change the Australian Public Service (APS)
1.1 Embedding Public Sector Management Reform 1
1.2 Introduction to APS Management Reform 3
1.3 Managerial Responsibilities of APS Secretaries Have Changed since 2013 5
1.4 Are Current Implementation Frameworks Limited? 7
1.5 Change Over Extended Time 11
1.6 Study Aims and Research Questions 12
1.7 Research Setting: The APS 12
1.8 Thesis Overview 16
1.9 Conclusion 18

Chapter 2: Management Reform and The Australian Public Service
2.1 Introduction to the Australian Public Service (APS) 20
2.2 Timelines of APS Management Reforms and their Impacts 20
2.3 Implementing Managing for Results and Program Evaluation 22
2.4 The APS: Staff, Centre, Management and Departmental Structure 24
2.5 APS Management Reforms and Change Management Theory 26
2.6 Two Alternative Models: Agency Annual Reports; ANAO Audits 29
2.7 Reforming the Skills of APS Staff 32
2.8 Past Management Reforms: Coming Around Again 34
2.9 Implementing Management Reform of the Single APS 38
2.10 Making APS Management Reform Stick 40
2.11 Public Sector Change Management in the APS 41
2.12 Conclusions 43

Chapter 3: Origins of the Theories: Implementation and Public Sector Reform
3.1 Introduction 46
3.2 Initial Implementation Theory 47
3.3 Change Management Theory and Implementing Management Reform 49
3.4 Organisation Theory and Extended Time 53
3.5 Evaluation Theory 54
3.6 Success in Implementing Management Reform 56
3.7 Making Successful Reform Stick 58
3.8 Current Gaps in Implementation Theory 59
3.9 Relevance of Gaps in Theory to Current APS Practice 63
3.10 Insights from this Review 67

Chapter 4: Re-Framing Implementation Theory
4.1 Introduction 73
4.2 Re-Framing Implementation Theory through APS Practice 74
4.3 Existing Implementation Frameworks 75
4.4 Evaluation Framework and Success 77
4.5 Factoring Extended Time into Implementation Success 77
4.6 Extended Time, Change Management and Implementation Theory 78
4.7 Leading the Changes of Management Reform 79
4.8 Factors in Models of Effective Management Reform 81
4.9 Perspective of Extended Time for Effective Reform Impact 83
4.10 Relevance of Three Existing Theories to Implementation 84
4.11 Linking those Three Factors Outside Implementation 88
4.12 Conclusions about Enhancing Implementation Theory 89
4.13 Framework of this Research 94

Chapter 5: Methodology
5.1 Introduction 97
5.2 Research Objectives 97
5.3 The Case Study 98
5.4 Rationale for Qualitative Single Case Study 98
5.5 Context: Demonstrating APS Program Performance 102
5.6 Potential Researcher Bias 103
5.7 Pilot Interviews 104
5.8 Interviewee Selection 105
5.9 Interviews Conducted 107
5.10 Triangulation 109
5.11 Data-Analysis Procedures 113
5.12 The Trustworthiness of this Research 118
5.13 Conclusions 120

Chapter 6: The Practice of Management Reform in the APS
6.1 Introduction 121
6.2 Context: the APS Management Reform Series 123
6.3 Embedding Management Reform 124
6.4 Managing Change in the APS 127
6.5 Managing APS Reform over Extended Time for Embedded Change 129
6.6 Change Agents in Implementing APS Management Reform 133
6.7 APS Secretaries as Change Agents: Necessary but not Sufficient 135
6.8 Short-Term Responsiveness versus Long-Term Change Management 138
6.9 Influences of APS Devolution on Uniform Implementation 140
6.10 Influences of Time on Implementation 141
6.11 Implementation Theory Absent in APS Practice 143
6.12 Change Management in APS Practice is Short-Term 144
6.13 Applying a Lens of Extended Time to Implementation 146
6.14 Conclusions 149

Chapter 7: Discussing the Common Clues to Embedding APS Reform
7.1 Introduction 151
7.2 Embedding APS Management Reform 152
7.3 Implementing Evaluation not Long-Term 152
7.4 Intended Influence of the APS Centre on APS Secretaries 154
7.5 Managing Whole-of-APS Change 155
7.6 APS Secretaries as Agency Change Agents 159
7.7 Differences between Change Agents and Change Management 161
7.8 Change Management Frameworks, Geography and Extended Time 164
7.9 Implementation Theory through a Lens of Extended Time 165
7.10 Revealing the Short-Term APS Reform Series 167
7.11 Implementation – Missing Factors of Momentum over Extended Time 170
7.12 Management Reform Missing Long-Term Effectiveness 172
7.13 Contributions to Theoretical Research Problem 174
7.14 Cross-Disciplinary Approach to Implementation Theory 175
7.15 Cross-Disciplinary Implementation and APS Practice 177
7.16 Re-Framing Implementation Practice by Evaluation and Extended Time 178
7.17 Defining Performance Reform Success through Evaluation 179
7.18 Information Systems for Effective Performance and Corporate Memory 180
7.19 Embedding Successful Change Requires Extended Time 182
7.20 Gaps in Elements Linking Implementation 183
7.21 Limitations of this Research 185
7.22 Insights into Current Theory 185
7.23 Insights into Current Practice 187
7.24 Responding to the Four Research Questions 188
7.25 Insights into Cross-Disciplinary Linkages 190
7.26 Conclusions 191

Chapter 8: Embedding Public Sector Management Reform
8.1 Introduction 195
8.2 Reforming the Single APS through its Secretaries 196
8.3 Geographic Dispersion and Performance Information 197
8.4 Extended Time to Embed Effective Reform 198
8.5 Maintaining Reform Momentum through Change Consolidators 200
8.6 Management Reform not Evaluated 202
8.7 Contributions of this Study to Theory 203
8.8 Contributions of this Study to Practice 206
8.9 Conclusions 209


APPENDICES

1. Departments of the Australian Public Service, at 26 August 2018 211
2. Invitation to Participate 212
3. Participant Information and Consent Form 213
4. Background Information 219
5. UNSW Canberra Ethics Committee Approval 221
6. Interviewee Questions and Objectives 223
7. Interviewees and Personal Codes used in Categorising the Open Codes from Interviews 224
8. Derivation of Open Codes from Main Quotes 225
9. Priorities of Secretaries and All Others 233

REFERENCES 234-270


Chapter 1: Implementing Management Reform to Change the Australian Public Service (APS)

… For theories of implementation to be useful to the practice of policy, they should draw on multidisciplinary insights specific to each case (Althaus & McKenzie, 2018).

1.1 Embedding Public Sector Management Reform

This thesis explores whether program evaluation as a management reform became embedded in the APS. That 1980s reform was one of many management reforms implemented in the APS over the past thirty-five years, which Chapter 2 summarises. Those management reforms, collectively labelled 'Managing for Results' (MfR) (Keating, 1990), included devolution of central powers to individual agency Secretaries and creation of a senior management cadre (the Senior Executive Service, or SES), with changes to budgetary arrangements, people management and industrial relations frameworks, plus new machinery of government arrangements (Sedgwick, 1994). A critical element of that overall MfR was program evaluation, which "had a key role in linking program implementation and policy development" (Keating & Holmes, 1990: p.174). As such, the introduction and mandated use of program evaluation was intended to change APS agency management practice and culture, to focus on demonstrating the achievements and effectiveness of agency programs.

A subsequent 1992 assessment of the overall MfR concluded that "evaluation has now been incorporated into the culture of APS management" (TFMI, 1992: p.26). This thesis is a case study of that MfR element of program evaluation, which was introduced in 1983 but no longer required in 1996 (Mackay, 2011). In this thesis, 'program evaluation' will be used when referring to that element of the MfR. It is distinguished from the discipline of 'evaluation' as theory (e.g. Funnell & Rogers, 2011; Poland, 1974; Rogers, 2008). Utilising evaluation theory in this thesis draws upon its time-related factors of short, medium and long-term timeframes, in combination with the intended (reform) outcomes being designed into those three timeframes. Consequently, evaluation theory assists in assessing whether the implementation of management reforms was effective and successful over extended time.

Here, extended time is used to examine how management reform change was initiated, how it was managed over time and whether it became embedded. 'Embed' has been used in APS practice and research, but only at single points of day-to-day practice, without this meaning of extended time and permanence (APSC, 2013a; APSC, 2013c; Lindquist & Wanna, Eds, 2011; Newcomer & Caudle, 2011; 't Hart, 2011). An alternative, 'stick', was limited to the non-specific 'long-term': "Successful long-term policy making in the embedding phase sees the framework established in the building blocks phase deliver on long-term policy goals" (Ilott et al., 2016: p.56). From organisation research, Pettigrew's use of 'longitudinal' has been adapted in this thesis as 'extended time', a new factor in implementation theory for examining whether APS management reforms lasted over the period of their implementation. The program evaluation reform was no longer formally required in 1996, because of the different priorities of a newly-elected Government, which were accompanied by a change of Secretary in the Finance Department. The new Secretary focussed "the department more fully on accrual budgeting and the contracting out of government activities" (Mackay, 2011: p.17). It is now acknowledged that "no reforms have yet succeeded in embedding a performance focus into the workings of the Commonwealth public sector as a whole" (DoF, 2014: p.1). Effectively and permanently reforming the management of the APS represents challenges to both theory and practice that are addressed in this study. This thesis explores whether it was possible to implement management change over extended time that is demonstrably effective in achieving management reform which becomes embedded in permanent public sector practice.

Current reform theory is not sufficiently comprehensive to address this factor of 'embedding'. There are not "very robust conceptual models to understand public sector reform" (O'Flynn, 2015: p.21). In APS practice, the most senior APS public servant, the then Secretary of the Prime Minister's Department, confirmed reform implementation should not be a 'set and forget' approach (Parkinson, 2016). As Head of the APS (Hamburger, 2007) and Chair of the Secretaries Board1, this position leads APS management reform. Reform is a current APS priority and a review is currently examining whether the APS is fit for purpose (Thodey, 2019; Turnbull, 2018). By initially using an implementation framework, this thesis examines past management reforms through a cross-disciplinary analysis based on change management, organisation research and evaluation theory. The lens of change management theory is applied to the key challenges of embedding the management reform of program evaluation in the APS.

Change is a characteristic of public sector management reform intended to have a permanent impact in practice (Aucoin, 1990; Fernandez & Rainey, 2006; Keating & Holmes, 1990; Kettl, 1997; Lindquist & Wanna, 2011; Pollitt & Bouckaert, 2011). However, establishing the effectiveness of past performance reforms has been described as an act of faith (Pollitt, 1995) and rarely tested by evaluation (Breidahl et al., 2017). Several management reforms were introduced over the past thirty-five years to improve APS performance (Verspaandonk et al., 2010) and there have been repeated calls to embed them in APS practice (APSC, 2007a; APSC, 2013a; APSC, 2013c; MAB, 1993; MAC, 2010; Podger, 2004). A current alternative to 'embedding' is known as making a reform 'stick'.

1 Under S.64(2)(a) of the Public Service Act (PSAa)

Both ‘embed’ and ‘stick’ emphasise the long-term and the permanence of policy and reform outcomes. There is now a recognised risk of the short-term displacing this long-term: “once a long-term policy has been established, government needs to mitigate the risk that focus is lost in moments of political transition” (Ilott et al., 2016: p.7). The factor of ‘stick’ is of increasing research interest (Aberbach & Christensen, 2014; Hood & Dixon, 2015; Hughes, 2016; Lindquist & Wanna, 2011; O’Flynn, 2015; Pal & Clark, 2015) and highlights that there is an absence of ‘embedding’ in public sector implementation theory. In practice, repeated calls for embedding public sector change also suggest there is a recurring gap between commencing the implementation of a management reform and ensuring that its outcomes endure in (APS) practice, or ‘stick’. This thesis draws upon insights from examining those gaps between implementing management reform in the short-term and failures over extended time to embed its changes in permanent public sector practice.

This thesis also considers the factor of extended time in implementation theory. The concept of extended time was adapted from longitudinal organisation research identifying the need to link the "content, contexts and processes of change over time, to explain the differential achievement of change reform objectives" (Pettigrew, 1990: p.268), rather than taking a snapshot examination of the research subject at a single point in time. By adapting Pettigrew's (1990) meaning of extended time to implementation theory, this study links the factors of implementing public sector change over time with the extended time needed for implementing and achieving management reform objectives. Both elements – change over extended time and achievement over extended time – are considered relevant to explaining implementation success or failure in public sector management reform.

1.2 Introduction to APS Management Reform

The background to this thesis is the Australian context for public sector management reform. Its origins are in the Royal Commission into Australian Government Administration (RCAGA, 1976). Royal Commissions in Australia are the means of showing to the public that "a government is taking an issue seriously" (Prasser, 2006: p.34). As it had been more than fifty years since the APS had been reviewed, the RCAGA had been established in 1974 by a new federal Government "to adapt the national public administration to the needs of contemporary government" (RCAGA, 1976: p.3). That Royal Commission concluded: "sensible judgement about changes necessary cannot be made unless the administration is studied and judged in the context of the Australian system of government. This system has traditionally been identified as an example of the Westminster system. The Commission has become increasingly conscious of the degree to which the Australian system in fact differs from the Westminster system and of the significance for the administration of such differences" (RCAGA, 1976: p.11). This also provides a unique Australian context for implementation theory.

The later MfR drew upon unimplemented proposals from that 1976 RCAGA. This reflected the newly-elected 1983 Government's policy and high priority of "reform of the machinery of government… to ensure [the APS] functions at the highest possible level of efficiency" (Hawke2, 1983: p.iii). Other elements of that MfR as it was implemented to 1993 are noted in Figure 1 of Chapter 2. The effectiveness of those managing for results reforms would be demonstrated by program evaluation, which was a "crucial element of the system of managing for results and had a key role in linking program implementation and policy development" (Keating & Holmes, 1990: p.178). This linked policy, planning, implementation and the evaluation of eventual program results, and is background to examining the Australian system of government and public administration. This desired link between government and program results is being repeated in a current management reform.

A current APS management reform shows similarities with the MfR era of program evaluation. The Public Governance, Performance and Accountability (PGPA) Act 2013 (Finance, 2014) mandates that agency heads must demonstrate their agency's non-financial performance, with the goal of improving "the standard of planning and reporting for Commonwealth entities, especially in relation to the management of their affairs and the delivery of programmes and services to the public. A strong performance management framework, with a focus on reporting results [emphasis added], is critical to achieving this goal" (Finance, 2014: p.1). By linking senior management with program results (Barrett, 2014) and "Evaluators and the Enhanced Commonwealth Performance Framework" (Morton & Cook, 2018), the objectives of the PGPA Act are similar to the 1980s reform of 'managing for results'. The gap of thirty years links extended time (Pettigrew, 1990) and "making reform stick" (Pal & Clark, 2015). Both reforms intended to ensure the APS was able to achieve results in Government programs.

2 RLJ Hawke was Prime Minister of Australia between 1983 and 1991.

Achieving results in implementing the PGPA Act was reviewed in 2018. That review of the Act's implementation found the APS was making progress on its non-financial performance, but that these changes were proceeding too slowly and that stronger change leadership was needed (Alexander & Thodey, 2018). The review thus pointed to the links between 'time' and top-down implementation in the APS that are examined in this thesis. Further concerns have centred on whether the Act will be effective (Barrett, 2016, 2017) and, more fundamentally, can it work (Podger, 2018)? This raises two issues underpinning this research. The first is whether these concerns about implementation are ones of design. The second is whether there has been an ongoing implementation problem, which (unless overcome) will lead to the 2013 reform being no better than earlier ones that failed to change the APS and improve government outcomes. The following changes in APS leadership and managerial responsibilities between the 1980s and 2013 provide the context for this study. They highlight the current significance of leading change for achieving effective implementation of management reform.

1.3 Managerial Responsibilities of APS Secretaries Have Changed since 2013

For one hundred and ten years, the responsibilities of APS Secretaries remained generic and did not mention 'performance'. This section draws out the links between the MfR era and the PGPA Act, by summarising the changed responsibilities of Secretaries that were made specific in 2013. Under the first Commonwealth Public Service Act 1902, "The Permanent Head of a Department shall be responsible for its general working, and for all the business thereof, and shall advise the Minister in all matters relating thereto" (CPSA, 1902: p.6). This meant managing their departments and implementing general government policy (Caiden, 1967). There were similar responsibilities in the replacement Public Service Act 1999, Section 57(1): "The Secretary of a Department, under the Agency Minister, is responsible for managing the Department and must advise the Agency Minister in matters relating to the Department" (PSAb). It is notable that those responsibilities emphasised the process-orientations of 'general working' plus 'advise' (1902, 1999) and 'managing' (1999). Absent from those Secretaries' tasks were any emphases on performance, leading change and driving reform.

The process of driving reform and its management has a significant impact on the effective implementation of reform programs. Top-down management by APS Secretaries requires "senior management involvement and/or oversight to deliver the required results" (Barrett, 2002: p.27). These top-down responsibilities of an agency head for focusing on results across the APS and achieving outcomes are now explicit in two Acts. First (as amended in 2013), Section 57(2) of the Public Service Act 1999 mandates that a Secretary's responsibilities are to: "(a) manage the affairs of the Department efficiently, effectively, economically and ethically; (b) advise the Agency Minister about matters relating to the Department; (c) implement measures directed at ensuring that the Department complies with the law; (d) provide leadership, strategic direction and a focus on results for the Department" (PSAc). In addition to a Secretary being a policy advisor to Government (Podger, 2007; Tiernan, 2011), these are explicit managerial roles of achieving an agency's outcomes and focussing on results.

This priority of achieving agency outcomes was reinforced in the PGPA Act. Section 15, 'Duty to govern the Commonwealth entity', specifies: "(1) The accountable authority [the Secretary] of a Commonwealth entity must govern the entity in a way that: … (b) promotes the achievement of the purposes of the entity; …". As well as those single-agency responsibilities, Secretaries are expected to have whole-of-APS skills. These are management skills relevant to this thesis and concern the "successful implementation of policy initiatives, demonstrating leadership by example, so as to promote co-operation and avoid territoriality" (ANAO/DPM&C, 2014: p.11). This requires a wider, whole-of-APS management perspective in individual Secretaries and highlights their joint roles in implementing effective management reform across the departments of the single APS. APS Secretaries are now responsible for the implementation, delivery and achievement of policy and reform, both in their agency and collectively across the single APS. These are dual management responsibilities.

Those responsibilities are underpinned by public sector leadership. APS Secretaries are now required both to lead and to change their agencies (Grube, 2011; Hamburger, 2007; Podger, 2009). However, there have been differences in practice between the three stages of commencing the implementation of a management reform, calling for its embedding and later evaluating the longer-term achievement of any embedded outcomes. O'Flynn has concluded that "we are poor at evaluating reform both theoretically and practically; indeed, our lack of attention to evaluation must be one of the great collective failings of public administration" (O'Flynn, 2015: p.19). This raises two questions as to whether APS leadership is part of the problem, despite the legal obligations in the above two Acts. Can APS Secretaries, in practice, implement management reform requiring demonstrable change that is effective and endures throughout their agencies, both now and into the future? Are there bases for effective management reform in practice in current implementation frameworks?

1.4 Are Current Implementation Frameworks Limited?

There are challenges to current implementation frameworks from reforming APS management practices. These challenges relate both to developing the effectiveness of the underlying change management processes and to demonstrating that the reforms achieved the desired long-term outcomes. Australian public sector management reforms and their implementation have been subject to extensive research, which includes the MfR (Bartos, 2003; Guthrie & English, 1997; Yeatman, 1987), the impact of the centre versus devolved management to agencies (Halligan, 2007), New Public Management, as MfR came to be known (Hood, 1991; Hood, 1995; Hughes, 1992; Hughes, 2012), making reform stick (Lindquist & Wanna, 2011; Lindquist & Wanna, 2015), or why reform doesn't stick (O'Flynn, 2015), reviewing what long-term management reform has achieved (Pollitt & Bouckaert, 2011) and the implementation of the PGPA Act (Barrett, 2014; Barrett, 2017; JCPAA, 2017; Rana et al., 2019). That research has been complemented by practice-based analysis.

The practitioner analysis of MfR alone resulted in much contemporary literature. This included Dawkins (1985), Holmes & Shand (1995), Ives (1994), Keating3 (1989; 1990; 1995), Mackay4 (1994) and Sedgwick5 (1994). Two of them were later to note the absence of program evaluation (Mackay, 2011; Sedgwick, 2011a) and their perspectives across fifteen years (1994-2011) illustrate the value of the long-term analysis adopted in this thesis. Otherwise, past research on APS management reforms was mainly commentary (e.g. Halligan, 2005; Lindquist, 2010). Only recently has the outcome-focussed question been asked: "How might agency leaders know policy implementation has succeeded?" (Lindquist & Wanna, 2015: p.232). That emphasis on success represents a current development in implementation frameworks. This thesis contributes to enhancing those frameworks through insights from three further theories. Implementing reform means managing changes in public sector practice, by drawing on change management frameworks. Because of the size and dispersion of the APS discussed in Chapter 2, such changes need to be managed over an extended period of time to be effective. Finally, utilising evaluation theory assists in establishing management reform effectiveness and success over time. Their relevance to current implementation frameworks follows.

3 Dr Michael Keating was Secretary of the Finance Department (1986-1991), then promoted to Secretary of the Prime Minister's Department (and APS Head) between 1991 and 1996. These were the ten years of MfR reform.
4 Mr Keith Mackay was the Senior Executive Service (Level 1) Head of the Evaluation Branch in the Finance Department during those MfR years.
5 Mr Stephen Sedgwick was Dr Keating's successor as Secretary of the Finance Department (1992-1997) and was later appointed as the Australian Public Service Commissioner (2009-2014).

Frameworks for implementing public sector management reforms have been extensively discussed. There has, however, been little agreement on research directions. These directions range widely: a focus on top-down reform (Van Meter & Van Horn, 1975), achieving policy objectives (Sabatier & Mazmanian, 1979), a bottom-up direction (Lipsky, 1980), the lack of a theoretical basis (Matland, 1995), a theory-practice split (O’Toole, 2004), a whole-of-government framework (Christensen & Laegreid, 2006), improving implementation through organisational change (Wanna (Ed), 2007), research being rated at a mature level (Saetren, 2014), and cross-sectional analysis versus analysis over extended time (Hupe & Saetren, 2015). This variety of directions supports the conclusion that “the study of policy implementation within the policy sciences remains fractured and largely anecdotal” (Howlett, 2018: p.1). Implementation research may have stagnated by focussing on a dichotomy of success/failure and not drawing on cross-disciplinary insights (Althaus & McKenzie, 2018). By examining Australian implementation practice through a cross-disciplinary framework of change management, extended time and evaluation, and identifying their insights, this case study contributes to a greater understanding of implementation research.

One of these cross-disciplinary contributions is from evaluation theory. By comparing management reform objectives with actual outcomes, evaluation contributes to assessing reform effectiveness. Although the APS governance structure called for reform to be embedded (MAB, 1993; MAC, 2010), there is no evidence management reform has resulted in permanent change (Aberbach & Christensen, 2014; Finance, 2014; O’Flynn, 2015). According to O’Flynn, “perhaps one of the central, and most confronting, questions for public management and administration practitioners and scholars is why reform attempts so often seem to fall short of their declared ambitions” (O’Flynn, 2015: p.19). This is one of the challenges in public sector implementation practice: the gap between the nominal intention (of initiating a management reform) and any real outcome (demonstrable impacts in the long term).

A reason for this gap may be the differences over time between beginning a reform and managing the ongoing changes that the reform requires. These differences relate to sustaining the changes being implemented long enough for them to be embedded. Public sector reform means “deliberate changes to the structures and processes of public sector organisations with the objective of getting them (in some sense) to run better” (Pollitt & Bouckaert, 2011: p.2). This links the intentions of a management reform with changes to improve an agency’s operation. As recently proposed by Shannon (2017) and van der Voet et al. (2016b), a wider perspective on effective reform change may result from evaluating existing implementation theory through a change management perspective. Change management theory contains a final stage considered to be relevant to implementation theory and embedding.

A widely discussed change management framework has been adopted in this thesis. This framework of Kotter (1995) is considered to be an exemplar in its field (Mento et al., 2002) and a “key reference in the field of change management” (Appelbaum et al., 2012: p.765). Kotter’s model of change management has eight steps: create urgency; form a change team; create a vision; communicate that vision; remove obstacles to change; create short-term wins; consolidate initial changes; and the last, institutionalisation: anchoring such change in permanent practice (Kotter, 1995). Elements of Kotter’s framework have previously been applied in two areas of the Australian public sector. The first was by the initial Chief Executive Officer of Centrelink between 1997 and 2004, in amalgamating welfare functions from two agencies in “the Centrelink Experiment” (Halligan & Wills, 2008), although the last stage of embedding was not achieved (Yeatman, 2009). The second was in changing the governance culture of the (former) Immigration Department in 2007 (Metcalfe, 2007). While remaining widely cited6, Kotter’s framework has not been widely taken up in the public sector7. It has also been absent from APS practice on change management.

Recent central guidance on “Empowering Change” (MAC, 2010) did not reference Kotter. The earlier comments by the 1976 Royal Commission (about an Australian context and system of public administration) were reiterated recently on change management in the APS: “In establishing change management practices for the APS, there is a need to take into account the unique nature of change and operations management in the public context” (APSC, 2014c: p.88). This provides a long-term Australian context for this research, reinforced recently by the Finance Department, the central agency implementing the current PGPA Act: “No reforms have yet succeeded in embedding a performance focus into the workings of the Commonwealth public sector as a whole. While there are individual Commonwealth entities that do provide examples of better practice that others can aspire to, there is scope for improvement at a whole-of-government level” (Finance, 2014: p.1). The last stage of Kotter, ‘institutionalisation’, has not been achieved in APS management reforms, and this thesis contributes to exploring why embedded outcomes may or may not be achieved.

6 Two-thirds of citations of Kotter have occurred in the past nine years (Web of Science analysis on 18/7/19).
7 3.6% of all citations have been in “public administration”, with 58% being in “Management” and “Business” (same Web of Science analysis on 18/7/19).

Implementing public sector reform also means change. Managing those changes is intended to achieve permanent outcomes (Armenakis et al., 2000). Drawing upon the public management reforms of thirteen countries, Pollitt & Bouckaert (2011) proposed a general reform framework with three stages: “reform contents; implementation process; results achieved” (Pollitt & Bouckaert, 2011: p.33). That framework omitted any further stages in the ‘implementation process’, or assessments of ‘effectiveness’, since the ‘results achieved’ may not correspond with the reform intentions. It is notable that the focus on change in their own definition of reform, as ‘deliberate changes to public sector organisations’ (Pollitt & Bouckaert, 2011: p.2), was not incorporated in their framework. This thesis examines whether reform implementation models can be expanded by including the active management of those changes.

Kotter’s model includes permanent change, which was not discussed by Pollitt & Bouckaert (2011). Pollitt & Bouckaert’s framework is also silent on two factors: the active management of those changes, and the extended time between commencing implementation and ensuring that the reforms have achieved their intended results. Both in practice and in theory, transformational public sector management reform requires organisational and staff changes (APSC, 2014a; Briggs, 2007a; van der Voet, 2014). This indicates that a fruitful development addressing this limitation is to re-frame implementation as change management (Shannon, 2017; van der Voet, 2016) that engages middle managers (Buick et al., 2017) and maintains reform momentum (Barker et al., 2018; Beck et al., 2008). This also recognises that reforming the management of public sector agencies involves leaders engaging with their staff at all levels, including direct supervisors. In the APS, this requires extended time.

Changing the large and dispersed APS requires extended time to maintain reform momentum. The reasons are set out in chapter 2, which explores the APS structure and its locations. They concern the size of the APS (over 150,000 people), with most of its staff located in the six States and the Northern Territory, away from the Secretary’s direct influence. Because “leaders need to demonstrate active use of performance information in decision making” (Moynihan & Ingraham, 2004: p.445), information on reform progress and outcomes in those regions should be reported by regional staff to management in Canberra. This is now required under the PGPA Act (Finance, 2014). While the place of such management information systems has long been recognised (e.g. Bozeman & Bretschneider, 1986), the value of this flow depends on top-level use of that information (Kroll, 2015; Moynihan & Pandey, 2010). This information to senior management in head office (Canberra) thus supports maintaining reform momentum. The significance of maintaining reform momentum over extended time is illustrated by the case study of this thesis: evaluation in the MfR reform. Begun in 1983, MfR was evaluated as implemented by 1992 (TFMI, 1992) but, even after those ten years, further work was still required to embed it throughout APS practice (MAB, 1993). That recommendation by the APS’s central Management Advisory Board demonstrated that the MfR reform was intended to result in permanent changes in APS management practices and its working culture (‘embed’). Whether that further work on embedding was undertaken was not evaluated.

Part of that reformed management practice was program evaluation. By linking policy and program implementation with outcomes, evaluation was regarded as “a crucial element of the system of managing for results” (Keating & Holmes, 1990: p.174) and of APS accountability to Parliament (Coates, 1992). Program evaluation was central to the 1980s reforms of APS accountability (Guthrie & English, 1997). This priority was downgraded after only ten years: following a change of Government and of Finance Secretary in 1997, program evaluation was no longer required (Mackay, 2011). By contrast, management reform in the United Kingdom has been examined over longer periods of twenty years (Bovaird & Russell, 2007) and thirty years (Hood & Dixon, 2016). This thesis introduces the factor of extended time over several decades, to explore whether an implementation framework over that extended time is needed to deliver enduring management change across the APS.

1.5 Change over Extended Time

Effective reform in public sector organisations means change over extended time. A long-standing criticism of organisation research was its being based on a one-off snapshot of a single event, rather than exploring organisational change over extended time (Pettigrew, 1990). The place of extended time in strategic change research has, however, been limited: “relatively few studies have adopted a fine-grained, longitudinal approach required for fully explicating process dynamics” (Kunisch et al., 2017: p.1006). By examining the implementation of similar APS management reforms separated by twenty-nine years (MfR of 1984 and the PGPA Act, 2013), this study addresses those criticisms. By studying the fields of change management and elements of extended time, plus evaluation theory to measure reform effectiveness, there may be useful insights for the future of public sector reform practice and for enhancing implementation theory. This study of implementing APS management reform and assessing current theories of implementation is framed by the following aims and research questions.

1.6 Study Aims and Research Questions

There are two aims: to review the potential contribution of change management in the public sector, as a factor in the present frameworks of management reform implementation, and to examine whether the changes started by reform can endure long enough to become embedded. There are four research questions. Did the main element of the MfR management reform, program evaluation, become embedded in the APS? What is the role of public sector change agents in embedding APS management reform? How can change management frameworks explain the challenges of implementing APS reform policies? What insights might be learnt from applying a lens of extended time to implementation theory and to examining how reforms endure? These questions are important as public sector reform implementation is undergoing a theoretical re-assessment. Shannon has argued “for changing the conceptual lens, from public sector reform to change management” (Shannon, 2017: p.6). This begins to re-frame management reform from its initiation to guiding the subsequent changes over the long term. The background to setting this research in practice follows, with information on the objectives and structure of the APS.

1.7 Research Setting: The APS

The objectives of the APS are set in law. It is “an apolitical public service that is efficient and effective in serving the Government, the Parliament and the Australian public”8. As demonstrated by the singular uses of “an” and “is”, the APS is regarded as a single entity. This means that implementing management reform is intended to be uniform throughout the APS as a single entity, rather than separately within each of its eighteen departments and one hundred and sixty-three other agencies (Finance, 2018a). A summary of the structure of the single APS and its staff follows.

8 Public Service Act 1999, s.3. At https://www.legislation.gov.au/Details/C2019C00057

At 30 June 2018, the APS consisted of 150,594 staff. Sixty-two per cent of them work in offices geographically located in the States and the Northern Territory, and thirty-eight per cent are located in Canberra (APSC, 2018a), where the head offices of agencies and their Secretaries are located (Beer, 2009). Seventy-two per cent of its senior managers (the Senior Executive Service) are also located in Canberra (APSC, 2018b). The APS is large and geographically distributed, which challenges the ability of a Secretary and agency senior managers to implement management reform uniformly from Canberra throughout the APS regional offices. Because the uses of ‘embed’ which follow have been repeated in approaches to APS reforms for twenty-six years, this thesis examines the differences between ‘implement’ and ‘embed’.

This research assesses the use of ‘embed’ in those reforms. Over the past twenty-six years, ‘embed’ has been a theme repeated in practice and research as a desirable outcome of APS reform. It is, however, more notable for being used in a prospective, future-focussed sense than as a demonstrable result which was achieved. The following seven extracts exemplify those continuing uses by the APS centre and the contexts of APS culture and practice, most clearly in the implementation of reform priorities.
1. “there is, however, an urgent need to press home the changes and embed them more firmly in the working culture of the Public Service” (MAB [Management Advisory Board], 1993: p.v);
2. “The commission has put a lot of effort recently into clarifying the APS values and in helping agencies understand what they need to do to embed them into their culture and get beyond rhetoric” (Podger [Australian Public Service Commissioner], 2004: p.14);
3. “A central responsibility of APS leaders is to ensure that sound governance policies and practices are embedded in their agencies” (APSC [Australian Public Service Commission], 2007a: p.iii);
4. “The MAC [Management Advisory Committee] Executive believes strongly embedding a culture of innovation within the APS is a vital component of that rejuvenation” (MAC, 2010: p.iv);
5. “The Blueprint sets an ambitious and interlinked reform agenda that seeks to improve services, programs and policies for Australian citizens. Above all, it recognises that to be strong, the APS must make the most of the talents, energy and integrity of its people. The proposed reforms therefore seek to boost and support the APS workforce and leadership, and to embed new practices and behaviour into the APS culture” (AGRAGA, 2010: p.xi);
6. “The plan describes the outcomes that we can expect for the APS as a whole when these building blocks are embedded and enhance the confidence and trust of the Australian people in the public service” (APSC, 2013c: p.1);
7. “It [evaluation] provides a strong basis for decision-making on whether to maintain or modify programs, and, by being embedded in the learning framework of departments, should enhance policy advice and program development and implementation” (ANZSOG, 2019: p.9).
Management reform or change was intended to be more than a process of initiating reform and was meant to result in a permanent change of APS practice (‘embedded’). Two recent examples from research and practice support this assessment. Carey et al. have recently concluded that governments need “to plan longitudinally and monitor the effects of current policy settings” (Carey et al., 2019: p.3). The Commissioner of the Australian Federal Police recently acknowledged how hard it has been to convince his six thousand staff to change (Colvin, 2019). These examples demonstrate a repeated senior management mindset at the centre seeking permanent reform.

Some reforms were embedded. Two were the creation of the senior management stratum, the Senior Executive Service, in 1984 (APSC, 2009) and the introduction of accrual accounting in 1998 (Guthrie et al., 2003). The former was made permanent by being established under s.35 of the Public Service Act9 and the latter was an APS-wide industry accounting standard. The otherwise repeated calls to embed reform identify two features of APS implementation practice: those objectives are always future-focussed, as intended outcomes, but no system has been proposed for achieving such an outcome of change in the single APS.

Change is a constant factor in APS operations. Some of those change and implementation challenges have been recognised at the top: “Increasingly though, it is becoming apparent that incremental change is not the option that will best equip the APS to meet the challenges of the future. Rather, change of a transformational kind is required—not just in what the APS does on behalf of government but also in terms of how it manages itself. Both external and internal drivers for reform are aligned” (APSC, 2014a: p.3). “Earlier this year, for example, the Commission identified human capital challenges for the APS that are seemingly intractable, given the slow progress in addressing them. These are strengthening the capabilities of many agencies to manage risk, change and performance” (APSC, 2014a: p.6). This means that enhanced staff capabilities to manage transformational change are now a central priority, and their absence may have been a contributory factor in the failures of past management reforms.

9 At https://www.legislation.gov.au/Details/C2019C00057, accessed 11 June 2019.

These failures to embed performance reform in the APS were recently acknowledged (Finance, 2014). The evidence for this conclusion was not stated, and there have also been pressures to improve the policy success of the APS (Shergold, 2015) and to better connect policy with its implementation (Parkinson10, 2016). The then Australian Public Service Commissioner, John Lloyd, also recently identified that “Change is a constant in public service and leaders have to assess the potential impact of change and find ways to support their agencies to prepare and adapt” (Lloyd, 2017a: p.3). This identifies APS managers as leaders of change and as implementation agents for any findings of the APS review announced in 2018, discussed next.

The APS is again being reviewed as to whether it is fit for purpose. Announcing the review in 2018, Prime Minister Turnbull stated: “Our APS must be apolitical, professional and efficient. It needs to drive policy and implementation, using technology and data to deliver for the Australian community. Many of the fundamentals of Australia’s public sector in 2018 reflect the outcomes of a Royal Commission held back in the mid 1970s [RCAGA, 1976]. It is, therefore, timely to examine the capability, culture and operating model of the APS, to ensure it is equipped to engage with the key policy, service delivery and regulatory issues of the day” (Turnbull, 2018). The 1976 Royal Commission was the origin of the ‘Managing for Results’ era, which provides a link with this current review and further context for this thesis.

Since 1976, embedding management reform has remained a constant challenge for the APS. This provides background to the current management and performance reform, the PGPA Act, where implementation is not proceeding fast enough and requires upgraded leadership (Alexander & Thodey, 2018). An interim report in 2019 on the current review of the APS highlighted the current relevance of this thesis by combining both ‘implementation’ and ‘embedded’: “Implementation is key. Some of these priorities have been recommended in past reviews but have either not been fully implemented or their original intent has not been fully realised. And taken individually, no single idea is sufficient to drive meaningful change in the APS. The APS will be fundamentally transformed if: the complete set of initiatives is taken forward as an integrated package; they are owned and embedded across the APS” (Thodey, 2019: p.17). These two recent reviews, and the questioning of whether the APS is fit for purpose, highlight issues of public sector practice relating to effective implementation, structural change management and embedding permanent management reform. They support this research into managing public sector management reform for long-term and effective change.

10 At the time of writing, Dr Parkinson was the Head of the APS in 2019, as the Secretary of the Department of Prime Minister and Cabinet (https://www.pmc.gov.au/who-we-are/the-secretary - accessed 10/1/19).

1.8 Thesis Overview

This thesis is titled “The Secretary Said: Make it So”. It challenges the assumption that APS leaders can implement reform throughout the APS by diktat: that once change is announced, it occurs. This first chapter has identified that, despite the espoused importance of reform in creating an effective public service, ongoing waves of management reform in the APS have failed to gain traction. The chapter suggests that although management reforms involve changes, aspects of implementation theory are yet to include change theory, in particular the concepts of change over extended time and the later evaluation of the effectiveness of those changes. This chapter identified that the case study for this thesis will be the main element of the ‘Managing for Results’ management reform, program evaluation: introduced in 1983 and no longer required in 1996 (Mackay, 2011). The research questions are presented, followed by an overview of why the APS is a current context for this research into implementation. The chapter concludes with a summary of two recent reviews of the APS and its management, which identify the present practice priorities of change, implementation and embedding to which this thesis contributes.

Chapter 2 describes the structure of the APS and establishes the context of multiple APS reforms started in the past thirty-five years. These reforms were a mixture of management frameworks, legal and financial requirements such as accrual accounting, and structural initiatives such as the formation of the Cabinet Implementation Unit in the Department of Prime Minister and Cabinet. This chapter provides background on the reform series and some of the lessons from implementing APS management reform, including their long-term impacts. It sets out the management challenges of implementing a reform uniformly across a single APS.


Chapter 3 examines theories of implementation and public sector reform. It considers the theory-practice gap more extensively, by examining the place of change management and evaluation. The place of measuring performance and outcomes is reviewed, as assessed in theory and practice. This chapter establishes that the factor of extended time has been absent from current models of implementation and change, and that there is developing interest in addressing this lack of theory connecting reform implementation with the eventual long-term outcomes of that reform. There is no single framework for implementing public sector management reform that proceeds long enough to be demonstrably effective in achieving embedded reform.

Chapter 4 outlines the framework for undertaking this research into embedding APS reform. It initially draws upon the theory and practice of “Longitudinal Field Research on Change” (Pettigrew, 1990) then examines current theories of reform implementation. From its origins in organisation research, Pettigrew’s theory suggests a focus on extended time may complement an enhanced theory of implementation and theories of change management and evaluation. By adapting elements of reform, change management and its agents, extended time and evaluation, this research potentially contributes to re-framing implementation theory.

Chapter 5 outlines the choices and implementation of the methodological design of this thesis. It explains how the research was undertaken using qualitative research through a single case study, starting with two pilot studies. The means of selecting thirty-two interviewees, together with the subsequent analyses of these interviews, are set out. Potential insider/outsider bias is discussed in the context of my professional background, as a member of the APS throughout the thirty-five-year period of reform practices under review. As an insider, I had contributed to other policy reforms, which informed my understanding of the APS reform series and actual implementation practice. These practical understandings were mediated by my longstanding outsider memberships of the Institute of Public Administration Australia and the Australian Evaluation Society, which provided me with alternative professional acculturations and perspectives on the workings of the APS.

Understanding of the workings of the APS was derived from interviews with key participants in the APS evaluation reforms of the 1980s/90s. These established how the reforms were implemented and whether any theory was utilised. The interviews were cross-referenced with subsequent analyses at that time by both researchers and practice-based commentaries. This analysis also included reforms currently requiring performance to be demonstrated, contributing new knowledge by extending implementation and change theories to the practice of embedding.

Chapter 6 examines the practice of management reform in the APS. It sets out the experiences of various reforms in the APS as understood by the participants in the interviews. The interviewees provided findings from a variety of perspectives: from practice, research and combined insider/outsider involvement in APS reform. The latter interviewees were those implementing both past and current reform. The interviewees reflected three levels of public sector systems: the (macro) top-level reform initiators, agency heads and senior managers; (meso) mid-level implementers; and (micro) central agency monitors and evaluators (internal and external), plus additional external public sector researchers and consultants. Those responses produced forty-three open codes and nine themes, which are then analysed more closely.

Chapter 7 explores the practice of APS reform, based on the interviews. It establishes the significance of the research findings and identifies insufficiencies in the factors of implementation frameworks. It contrasts current criticisms of implementation theory with APS practice, identifying new knowledge that addresses the research-practice gap. This focusses on the place of extended time, change theory and a model of effectiveness, or evaluation; these are currently separate frameworks. This chapter also places the research findings in the context of recent concerns about APS institutional amnesia and the lack of long-term ability of APS leaders to demonstrate effective performance.

Chapter 8 reviews the theoretical research problem and generalises from the research findings. It considers the implications for current and new knowledge on enduring public sector reform. The chapter provides both a reflexive account of the problems encountered in developing this thesis and also of the quality of its conclusions. Finally, it draws together the findings and considers whether enhancing the paradigm of management reform implementation is possible.

1.9 Conclusion

Embedding management reform of the public sector remains a challenge. This is evident in both theory and practice, in relation to the absence of a framework for implementing public management reform and ensuring that its changes become embedded. One research stream has recognised the existence of the repeating reform series (e.g. Hood & Lodge, 2007; Jones & Kettl, 2003; Pollitt, 2013; Wettenhall, 2013), but without examining the implications of this repetition for implementation theories. The APS has been subject to a series of reforms, and the 1980s “managing for results” reform is similar to current APS requirements for agencies to demonstrate their non-financial performance under the PGPA Act. In the similar concerns with “whether a Commonwealth entity is delivering on its intended results” (Finance, 2017f: p.3), each management reform has focussed on the results and effectiveness of the APS.

This thesis compares their respective implementation strategies, for their contributions to potential enhancements of implementation theories. This chapter has identified that this enhancement may include features of change management theory, adapted to the public sector from its private sector origins. The Secretary was identified as a potential reform change agent. The next chapter provides a background to the APS. It sets out how the APS is structured, where it is geographically located throughout Australia, the nature of the management reforms to it since 1984 and the specific place of program evaluation as a means of assessing implementation effectiveness between 1984 and 1996. The chapter gives examples of practice-based reforms that are considered to exemplify the reform series and the challenges to embedding enduring reform in the APS.


Chapter 2 Management Reform and the Australian Public Service

2.1 Introduction to the Australian Public Service (APS)

Chapter one outlined the challenges to implementing management reforms that drive change in the APS. This chapter summarises the timelines of past APS management reforms and their impacts on APS effectiveness; it then outlines the APS structure and its management. That provides a detailed background to the management reforms since the early 1980s and the discussion on embedding reform, and summarises the three research fields of change management, organisation research and evaluation contributing to this chapter. Its main purpose is as background to the interviews conducted for this case study.

2.2 Timelines of APS Management Reform and their Impacts

The timelines of management reforms between 1983 and 2013 are summarised in Figure 1.

Figure 1 Timelines of Reforms of the APS: 1983-2018

Year      Main Issue
1976      Royal Commission on Australian Government Administration (RCAGA)
1983      Hawke Government statement on APS efficiency, effectiveness, equity, responsiveness
1984      Financial Management Improvement Program (FMIP: Finance Dept)
1984      Creation of Senior Executive Service (SES)
1987      Public Service Board replaced by Public Service Commission
1987      Management Advisory Board (MAB) established
1987-88   Introduction of annual agency Evaluation Plans
1989      Formation of Management Improvement Advisory Committee (as sub-committee of the MAB)
1990      APS requirement for Portfolio Evaluation Plans (Finance Dept)
1992      Evaluation of Decade of Reform (FMIP & "managing for results")
1993      Introduction of APS core staff competencies (Public Service Commission)
1997      Customer Service Charters
1998      Introduction of Financial Management & Accountability (FMA) Act and Commonwealth Authorities & Companies (CAC) Act
1998      Introduction of Accrual Accounting
1999      New Public Service Act
1999      Senior Executive Leadership Capability Framework
2003      Establishment of Cabinet Implementation Unit (CIU) (in Department of Prime Minister & Cabinet - PM&C)
2007      Secretaries' contracts extended from three to five years
2008      Secretaries' performance bonuses terminated
2008      Operation Sunlight: tighten outputs and outcomes framework
2010      Ahead of the Game APS reforms announced by Government
2013      Public Governance, Performance and Accountability Act (PGPA) replacing the 1998 FMA/CAC Acts, including a new requirement for Annual Performance Statements (in addition to the Annual Report)
2015      CIU abolished (but functional responsibility retained by PM&C)
2018      Then-Prime Minister Turnbull's Review: is the APS Fit-for-Purpose?

Sources: AGRAGA, 2010; ANAO, 1991; Finance, 2014; Gold, 2017; Guthrie et al., 2003; Ives, 1992; Kemp, 1999; NLA, 2017; Tanner, 2008; Turnbull, 2018; Verspaandonk et al., 2010; Wanna, 2006.


As will be discussed throughout this chapter, these reforms also illustrate the reform series (Hood & Lodge, 2007; Pollitt, 2013). Illustrating the current relevance of this thesis, there is a direct link between those reforms originating in 1983 and the current APS review.

The APS is currently being reviewed as to its fitness for purpose. In announcing this review on 4 May 2018, Prime Minister Turnbull noted "[M]any of the fundamentals of Australia's public sector in 2018 reflect the outcomes of a Royal Commission held back in the mid 1970s" (Turnbull, 2018). A brief background on that Royal Commission (RCAGA, 1976) was set out in chapter 1 and greater detail follows. The Report of that RCAGA was effectively ignored by the federal government in office between 1975 and 1983, but became the basis for the incoming Government's 1983 agenda to reform the effectiveness of the APS in achieving program results (Hawke, 1983; Parker & Guthrie, 1999). Those earlier RCAGA reforms were implemented through the 1980s change management process of the Financial Management Improvement Program (FMIP), also known as Managing for Results or MfR (Keating, 1990; SCFPA, 1990). As one element of MfR, program evaluation was the link between the public policy cycle and its effective implementation, where "outcomes are subject to evaluation" (Bridgman & Davis, 2003: p.102; Keating & Holmes, 1990). The evaluation of that MfR reform after ten years concluded that APS culture had been changed, but that further work was required to embed those changes in APS practice (MAB, 1993; TFMI, 1992). Program evaluation was no longer required after 1997, but has recently been recommended as the means of demonstrating non-financial performance under the PGPA Act 2013 (Mackay, 2011; Morton & Cook, 2018). That 1993 conclusion on 'embedding' is the starting point of this research, which reviews whether past APS experiences with management reform can contribute to the implementation of the PGPA Act and enhance current implementation theory with an Australian context.

The background is the earlier Royal Commission's call for such a context for public sector change. The RCAGA observed "Sensible judgement about changes necessary cannot be made unless the administration is studied and judged in the context of the Australian system of government" (RCAGA, 1976: p.11). That Australian system is federal: the national Government in Canberra "recognises that the States have primary responsibility for many areas of service delivery, but that coordinated action is necessary to address Australia's economic and social challenges" (Council of Federal Financial Relations, n.d.; Craft & Halligan, 2017). Tiernan concluded "a narrative of failure permeates debates about Australia's federal system" (Tiernan, 2015c: p. 298). The background to this thesis is that federal system of government.

These comments on the Australian system of government and the accompanying Australian system of administration provide the context of this thesis, whose examination is restricted to this management reform of the Australian Public Service.

2.3 Implementing Managing for Results and Program Evaluation

The MfR reforms introduced outcome-focussed management into APS administration. This involved a focus on performance for results, emphasising "economy, efficiency and effectiveness" (Yeatman, 1987: p.346). The implementation of those reforms was evaluated as successful in creating an outcomes-focussed management culture (TFMI, 1992). The Report was submitted to the Management Advisory Board (MAB), which advised the Government on APS management. Notably, the MAB receiving that evaluation of the MfR included six of the same Secretaries whose implementation of that same reform had just been evaluated (e.g. MAB, 1993). In retrospect, this should have raised concerns about conflicts of interest on the part of those six Secretaries. Despite this report, there were early concerns for the accountability, implementation and continued momentum of the MfR reform.

The momentum of implementing MfR had been too slow. Reviewing the implementation of the FMIP and program evaluation in 1990, a Parliamentary Committee concluded there was "a need to maintain the momentum for a commitment to reform" (SCFPA, 1990: p.xiii). The ANAO also found "evaluation was in general receiving a lower priority in agencies than the ANAO believes was desired by the Government. There is a need for a greater commitment to evaluation on the part of senior management" (ANAO, 1991: p.7). Slow implementation was a factor in the evaluation of the MfR: "the reforms have been moving in the right direction but neither far nor fast enough" (TFMI, 1992: pp. 26-27). This was an early identification of the need to maintain reform momentum over extended time (Jansen et al., 2016). No means to ensure this momentum were identified in the evaluation of MfR implementation.

The MfR reforms were implemented centrally and exemplified a top-down implementation process, undertaken by the Canberra-based Finance Department (MAB, 1993; Nethercote 1984, in Mascarenhas, 1990; Sabatier & Mazmanian, 1980; Van Meter & Van Horn, 1975). Central assistance was offered by staff of the Evaluation Branch of that Finance Department, to help departments evaluate their programs and assess their ability to meet government

objectives (Dawkins, 1985; Mackay, 1994). These were indications of central design of reform and of specialised staff skills. Claims of positive impact for the MfR reform were made early in its implementation.

Implementation success was claimed after eight years, in 1990. The MfR reforms had created a culture of "managing for results in the public interest" (Keating, 1990: p.387). This was achieved by increased accountability to government, devolution of management, staffing and financial responsibilities from central agencies (such as Finance and the Public Service Board) to heads of agencies (Secretaries) and assessing expenditure effectiveness (Keating & Holmes, 1990). Those conclusions emphasised the connections between top-down government priorities and achieving results in government programs. A closer inspection of the MfR evaluation showed incomplete implementation and embedding, as regional staff were more concerned with internal factors such as "further job losses and workload increases" (TFMI, 1992: p.488). This demonstrated an implementation gap, as many regional staff did not share head office's views about the benefits of reform changes. These regional concerns echoed the conclusions of Pressman & Wildavsky (1973) about implementation going awry between head office (Washington) and the distant region (California). Weller (1993) identified this difference as between developing national policy at the centre (head office) and its later delivery as a program in the regions.

Differences can occur between national policy agreed by Government and its later delivery. The differences in implementation are between the central office's national viewpoint and the delivery activities of APS staff in the regions, or in other agencies in the governments of the States and Northern Territory. Factors in incomplete reform have included the APS size, the diversity of its tasks and its geographical dispersion, illustrated in Figure 2 following (TFMI, 1992). An underlying factor in the analysis of this thesis is the declared core APS value of "a close focus on results in a single, integrated Public Service" (MAB, 1993: pp. 5-6). This set up a tension between single-APS reforms and the management discretion of individual APS Secretaries. This was exemplified in the conclusion by the then Finance Secretary that there should be more use of evaluation, accompanied by changes in managers' readiness to use the

11 Minister John Dawkins was Minister for Finance between 1983 and 1984.
12 Dr Michael Keating was the Secretary of the Commonwealth Department of Finance between 1986 and 1991.

reforms (Sedgwick, 1994). This chapter will now detail the accompanying management challenges to implementing the changes of management reform uniformly across that single APS, which is "an apolitical public service that is efficient and effective in serving the Government, the Parliament and the Australian public" (Public Service Act 1999, s.3). The APS structure, the national locations of its staff and the organisation of a typical APS agency follow.

2.4 The APS: Staff, Centre, Management and Departmental Structure

The APS consists of eighteen Departments (detailed in Appendix 1) and four Departments which only support the Australian Parliament, together with one hundred and sixty-four other specialised agencies, such as the Australian Bureau of Statistics, the Bureau of Meteorology, the National Archives and the National Blood Authority (Finance, 2018a). As at June 2018, the APS totalled 150,594 staff, with most located in the States and Northern Territory (NT) and thirty-eight per cent in Canberra. These staff were distributed throughout Australia as follows:

Figure 2 Locations of APS Staff Nationally

[Map of Australia showing APS staff numbers by location: Canberra 56,961 (37.9%); other locations 28,076 (18.7%); 25,397 (16.9%); 16,927 (11.2%); 8,952 (5.9%); 6,846 (4.5%); 3,854 (2.6%); 2,080 (1.4%).]

Source: APSC (2018a)

13 Mr Stephen Sedgwick was Dr Keating's successor as Finance Secretary (1992-1997) and later was the Australian Public Service Commissioner between 2009 and 2014.
14 Public Service Act, 1999, S.3. At http://www.austlii.edu.au/au/legis/cth/consol_act/psa1999152/s3.html

Most (sixty-two per cent) of the APS staff are located in APS regional offices. Examples of the distances involved in the span between managing reform implementation and its final impact include: Canberra-Perth = 3,720 km; Canberra-Darwin = 3,133 km; Canberra-Brisbane = 944 km.

The APS policy and management centre consists of three agencies. These are the Prime Minister’s Department (PM&C), Finance Department (DoF) and the Australian Public Service Commission (APSC) (Di Francesco, 2000; Hamburger, 2007; Sedgwick, 2011a). Auditing the APS’s subsequent performance is undertaken by the Australian National Audit Office: ANAO (Talbot & Wiggan, 2010). Each of these APS Departments consists of policy, management and program delivery responsibilities, nominally exercised by the Secretary (the agency Head).

Those responsibilities are outlined in Figure 3. There are three management strata of the Senior Executive Service (SES): Deputy Secretaries, First Assistant Secretaries and Assistant Secretaries; then Executive Levels (EL) two and one and the Administrative Service Officers (ASO) six to one. Many Departments have branch offices in the States and Northern Territory, delivering program services to the public (Matheson, 2016). The specialised agencies also have chief executive responsibilities for implementing reforms or programs, but with fewer managers and flatter staff structures (Finance, 2009). A typical Departmental implementation structure with responsibilities in both head office (Canberra) and its regional offices follows.

15 Each as calculated on https://www.distancecalculator.net/ (accessed 23/7/19)

Figure 3 A Typical APS Departmental Structure

Secretary
- Deputy Secretary (SES 3) - possibly more, according to the size of the agency
  - First Assistant Secretary (FAS) (SES 2) - with similar staff structures for each FAS
    - Assistant Secretary (SES 1)
      - Director(s) (EL 2)
        - Assistant Director(s) (EL 1)
          - ASO 6
          - ASO 5, 4, 3, 2, 1
- Regional Offices in the States & Northern Territory (with similar staff structures)

Source: adapted from Health (2018a).

This structure exemplifies the multiple spans of management in head offices in Canberra. Those senior managers are responsible for both policy advice to the government and effective service delivery (APSC, 2009). Whether this has occurred will be examined by drawing upon APS practice after that 1992 Report. This draws out the interactions between the practice of management reform, implementation theory and successfully embedding reform changes.

2.5 APS Management Reforms and Change Management Theory

This section considers what successful implementation of management reform would look like, by examining the place of effectiveness in APS reforms. It examines the long-term impact on the MfR reform of management, staffing and financial responsibilities being devolved from the centre (the Finance Department and Public Service Board) to individual APS Secretaries (Keating, 1995; Stewart & Kimber, 1996). Despite claimed changes in APS performance culture and practice because of the MfR reform, those changes did not endure after 1997 (Keating, 1990; Mackay, 2011; TFMI, 1992). This absence was only realised by the centre nearly twenty years later (Finance, 2014). As contributions to current implementation and change management theories, a summary of APS management reform cycles follows.

Public sector management reform can occur in series (Cuban, 1990). Figure 1 set out APS management reforms and their timelines between 1983 and 2018. The reform of MfR was regarded as achieving its purpose by the early 1990s, as it was "overtaken by other reform agendas" (Wanna et al., 2000: p.201). This view was not shared by the centre, where accountability concerns were raised by both a Parliamentary review of the MfR and the accountability centre, the ANAO (ANAO, 1991; SCFPA, 1990). Their reservations concerned senior management priorities for maintaining commitment to this reform. These concerns about commitment contrasted with the mandatory requirement from the MfR reform: that each agency Secretary develop annual Portfolio Evaluation Plans (PEPs). Those PEPs ensured that effectiveness in meeting government objectives could be demonstrated throughout the APS (Mackay, 1992). Despite those concerns about evaluation by the ANAO, PEPs disappeared in 1997.

The absence of those PEPs reduced MfR reform momentum. Evaluation was considered too process-oriented and the Government accepted Departments' concerns that producing a lengthy PEP and conducting evaluations were administratively burdensome (Mackay, 2011). This demonstrated one aspect of time examined in this thesis: the short-term nature of those original reform priorities. Important in 1992, those evaluation priorities were no longer required by 1997 (Mackay, 2011; TFMI, 1992). The earlier devolution of program evaluation to Departments was claimed to "enable Secretaries and heads of agencies to take charge of performance management in their organisation" (ANAO, 1997: p.7). The evidence for this conclusion was not clear, as devolution introduced significant discretion into an individual Secretary's priorities for the management and performance of each Department.

In 2003, those priorities of individual Secretaries became subject to central monitoring. The creation of the Cabinet Implementation Unit (CIU) in the Department of Prime Minister and Cabinet (PM&C) was regarded as "strategic steering from the centre" (Halligan, 2005: p.28). It reflected renewed interest in the central monitoring of the changes expected from Cabinet decisions and the associated priority of their implementation effectiveness. The CIU did not independently evaluate the effectiveness of the eventual program outcomes and depended on traffic light progress reports (red, orange, green) to it and Cabinet, from the implementing agencies (Hamburger, 2007; Wanna, 2006). This represented accountability tensions between management devolution to independent Secretaries and the centralised reporting of whole-of-APS outcomes to Government. These tensions were only apparent later.

They were later demonstrated by former senior APS managers of the MfR reforms. Mackay (1994, 2011) was Evaluation Head in Finance during the MfR years and Hawke (2012) was an SES Branch manager in Finance, at the same time (di Francesco, 1999; Hawke, 2007). They identified the lack of evaluation evidence in government policy-making and of uniform information to central management on agency and APS performance (Hawke, 2012; Mackay, 2011). Made fifteen years after evaluation was abolished, their assessments reveal two factors relevant to implementation. The first is the value of analysing change over extended time and the second is the impact of the central Department (Finance) on implementing APS management reform. They also identify the challenges to reforming the single APS.

Reforming the single APS is now considered to have been hindered by the devolution of management, financial and staffing responsibilities to each Secretary. Initially, the devolution of those management and performance responsibilities from the APS centre to each Secretary was applauded (Keating & Holmes, 1995). Some concerns had been raised about the extent of real central authority over devolved agencies, because that devolution had led to change being resisted (Aucoin, 2012; Halligan, 2018). Only some twenty years after that policy of management devolution was it acknowledged as having "gone too far, fragmenting government" (Podger, 2018: p.208). There have been two contrasts in the expected uniformity of implementing management reform across the APS. The first was that evaluation had been devolved to Secretaries; the second concerned the same single APS being expected to undertake better evaluations, but with evaluation left to the discretion of each agency's Secretary. The APS lost an embedded culture capable of demonstrating implementation and program effectiveness through evaluation (Hawke, 2012). The practice-based comments of Hawke, Mackay and Podger show the importance of the extended time perspective of several decades adopted in this thesis, especially in examining the APS reform cycle.

As outlined in Figure 1, that APS reform series began in 1983 with the election of a reformist Labor Government. Earlier, strengthening APS-wide responsiveness to government was a rationale for reform in the 1976 Royal Commission, which emphasised 'effectiveness' in APS administration and program delivery throughout its Report (RCAGA, 1976). The Royal Commission both defined this as "an action or program is effective if it achieves the purpose

16 Between 1982 and 2014, Andrew Podger was successively an SES Officer in the Finance Department during the MfR years, then Secretary of three APS Departments and finally Australian Public Service Commissioner.

for which it was initiated" (p.31) and recommended such assessments be undertaken regularly: "progressive review of the effectiveness of continuing programs should be undertaken" (p.49), by program evaluation (p.205). The high-level priority of a Royal Commission does not lead to sustained reform if its initiators later lose either interest or office (Wilenski, 1979). As Guthrie & Parker (1999) identified, this was the initial fate of the Royal Commission Report, whose implementation was disrupted by an indifferent government between 1976 and 1983.

Such disruption threatens APS reform implementation. This occurs when the reform agenda is no longer a government priority, or Ministers of the day lose interest either in the policy or its implementation (Di Francesco, 2000; Prasser, 2004; Radin, 2000; Tiernan, 2015a; Wettenhall, 2013). This identifies the risk to the momentum of reform changes from their initiators' later absence. It also identifies a gap in time, between reform being initiated and the required management of it over the long term. It is now recognised that "[n]o reforms have yet succeeded in embedding a performance focus into the workings of the Commonwealth public sector as a whole" (Finance, 2014: p.1), although no supporting evidence was provided for that conclusion. Long-term management reform of the APS has not been achieved. There is a gap in research between the initial implementation of APS reforms and applying any eventual lessons in practice (e.g. Head & O'Flynn, 2015; Lindquist, 2010). More comprehensively, O'Flynn has concluded "we do not have very robust conceptual models to understand public sector reform" (O'Flynn, 2015: p.21). Two alternative models from APS practice for understanding the outcomes of APS reform and performance are outlined next.

2.6 Two Alternative Models: Agency Annual Reports; ANAO Audits

The outcomes of APS performance-related reform appear in agencies' Annual Reports. Such Reports are then examined by Parliamentary Committees and in ANAO audits, beginning to answer the question: what would reform success look like? (ANAO, 2003; Pollitt, 2001; Rhodes, 2016). Formerly, the results of evaluations provided two answers about implementation success: APS management could use evaluation to assess the effectiveness of new policies or reforms, and those same results demonstrated APS accountability to Parliament (Dawkins, 1985; Keating, 1990; SCFPA, 1990). There were three uses of program evaluation: (1) financial, by assisting in the initial preparation of Budgets (Mackay, 1992); (2) feedback to Parliament on APS achievements from that Budgeted expenditure (Coates, 1992); (3) assisting the ANAO to fulfil its obligations to Parliament (Taylor, 1992). These three levels of accountability and performance information resulted from evaluations.

However, the practice of evaluation in the APS was no longer required after 1997. This was because of tensions between national government programs and the operating priorities of individual agencies and their Secretaries' management practices. One result of this tension was the abolition of agency Evaluation Units (Mackay, 2011). The Departmental Evaluation Units had appeared to serve the interests of departmental management and not those of the Government, concentrating on "micro-management issues, rather than on the fundamental question of programme effectiveness in meeting designated objectives and national objectives" (Guthrie & English, 1997: p.162). Alternatives to agency program evaluation have included APS Annual Reports and evaluations by the independent ANAO, which are considered next.

Annual Reports are statements of agency performance, especially regarding their implementation of performance reforms. They demonstrate agency accountability to Parliament, although these Reports have not always been adequate in the quality of their performance information (Bartos, 1995; Davis & Bisman, 2015; Milazzo, 1992). Agencies' use of the mandatory central Guidelines on Annual Report preparation was not policed by the Prime Minister's Department, and the Reports have not been satisfying Parliament (Davis & Bisman, 2015; Milazzo, 1992; Thomas, 2009). Agencies later moved away from demonstrating non-financial program achievements towards accountability for their expenditure, because of the adoption of the later financial management reform of accrual accounting in 1998 (Guthrie et al., 2003; Parker & Guthrie, 1993). This demonstrated two factors in this review of APS reform practice: the reform series and the lack of embedded practice.

The ANAO has repeatedly attempted to embed management practice further. The ANAO jointly introduced Better Practice Guides on performance information (1996), annual performance reporting (2004), public sector governance (2014a) and the successful implementation of policy initiatives (with PM&C, 2014). They represented shifts in best practice performance, from practice by individual agencies to Audit Office Guides offered as guiding source documents to managers in those agencies (Barrett, 2004b; McPhee, 2011). Accountability for this performance also shifted, from internal frameworks (agencies' evaluation plans and Annual Reports) to external ones (via ANAO audits).

The earlier 1992 claim of embedded reform was later found to be absent in internal practice by APS staff. These ANAO Guides had been derived from the finding that "agencies did not have suitable performance measures in their Annual Reports relating to the quality

outputs/administered items or effectiveness/impact indicators for outcomes” (ANAO, 2003: p.10). This led to an “APS Guide to Better Practice in Annual Performance Reporting”, defining and measuring program achievement with SMART performance indicators: Specific; Measurable; Achievable; Relevant; Timed (ANAO/DoFA, 2004: p.13). These Guides on best practice were a shift back to the accountability centre of PM&C, Finance Department and Audit Office, from the earlier devolved expectations that each agency would demonstrate its own effective performance. They were also guides and benchmarks to any later ANAO audits (Barton, 2009). Despite those Guidelines from the centre, there was a gap in APS practice.

Much non-financial performance information was not being reported in agencies' Annual Reports. While the usefulness of this information was appreciated by managers (when used), this non-reporting remained a problem despite a decade of such reform (Lee & Fisher, 2007; Lee, 2008). The limited use of non-financial performance information in Annual Reports restricted the value of those Reports in agencies' accountability to Parliament, especially through any data from program evaluations (Coates, 1992; Lee, 2008; Thomas, 2009). Complementary Ministerial and accountability frustrations were evident in the contemporary conclusion of the then Finance Minister Tanner that the APS outcomes and outputs framework had not worked and "reporting of outcomes [is] seriously inadequate" (Tanner, 2008: p.4). This inadequacy was confirmed by the centre, when the then Secretary of the Finance Department concluded "there is opportunity to reform evaluation given the significant momentum to improve APS performance" (Tune, 2010: Slide 31). Although the Secretary made passing references to ANAO reports on performance and the need to embed accountability (Tune, 2010: Slides 11, 17, 24), no references were made by either the Minister or his Secretary to the ANAO Guidelines or their use in demonstrating APS performance. This could be regarded as an example of both organisational amnesia (Wettenhall, 2011) and what Finance eventually acknowledged (Finance, 2014) was the lack of permanent impact of past management reforms.

A past example was central guidance on APS performance, reflected in SMART Key Performance Indicators (KPIs). These were used to report achievements against objectives (ANAO, 2011). The impact of the ANAO's earlier work (ANAO, 2003) was strengthened by it being given legislative powers in 2011 to audit agencies' performance measures. The ANAO concluded that "it is time for greater attention, investment and resourcing to be given to the quality and integrity of KPIs used by public sector entities to inform decisions about the performance of government programs" (ANAO, 2013b: p.21). This legislated power given to the accountability centre (the ANAO) was one indication of the long-term impact of abolishing central Portfolio Evaluation Plans in 1997 and of the significance of common models of APS performance (Mackay, 2011). Being close in time to the earlier interest by Minister Tanner in performance, it became a re-application of APS uniformity in defining program performance and ensuring the comparability of measuring the results of APS program performance.

17 In Section 18A of the Auditor-General Act 1997 - https://www.comlaw.gov.au/Details/C2015C00210

These developments over 2010-13 reveal a decline in the APS culture of 'managing for results'. They question earlier claims of it being embedded in the APS (TFMI, 1992). Although the ANAO and Finance issued Best Practice Guidelines on annual reports, there was no evaluation of their implementation (ANAO/DoFA, 2004; Barrett, 2012). This deficiency was addressed by the ANAO later auditing the implementation of its earlier Audit Reports (e.g. ANAO, 2013a). This ANAO practice brought out tensions, which continue to exist, between the centre and devolved APS management. These developments renewed central interest in program implementation and demonstrating APS performance. Tiernan (2015b) has suggested that the next step in public sector reform should involve developing the public administration skills of career officials. Background to the links between a Secretary and the agency staff responsible for implementing reform is provided in the following review of the reform series in APS staff skills.

2.7 Reforming the Skills of APS Staff

Defining core APS-wide staff skills in managing for results was a further management reform in 1993. Developed from the 1992 National Training Agenda, staff competencies developed by the Joint APS Training Council covered both senior officers and administrative officers (Hall, 1995; JAPSTC, 1992). Those competencies were intended to "incorporate aspects of the Government's reform agenda and thereby reinforce the policies and values promoted in 'Building a Better Public Service'" [MAB, 1993] (Ives, 1995: p.331). At the middle management levels, this reform set out the management and administrative skills to both deliver and evaluate results (Dixon, 1996). The explicit link with the FMIP is in the following "Plan for Results" skill for senior officers (a level now known as the Executive Levels).


Figure 4 Core APS Competencies for APS Middle Managers

[Figure not reproduced: the "Plan for Results" competency for senior officers.]

Source: JAPSTC (1992).

This skill complemented the reform of formal program evaluation required of APS agencies.

Issued in September 1992, these skills supported the MfR reform then being evaluated, the evaluation of which was published in December 1992 (TFMI, 1992). The abolition in 1997 of evaluation requirements (Mackay, 2011) had implications for staff capabilities in implementation, performance measurement and assessing reform or program effectiveness that were not recognised at the time. These implications had been partially addressed by the ANAO, through best practice guides on Annual Reports (ANAO, 2003) and Key (agency) Performance Indicators (ANAO, 2011; 2013b). Those guides have now been withdrawn, because of overlap with other agencies' guidance material (ANAO, 2018). There is now a further gap between reform implementation at the APS agency level and its practice at the staff level.

This gap has received some scrutiny but is yet to be fully explored. O’Flynn has concluded “we have little serious understanding of the competences and capabilities of the current APS, and we are not well-placed to predict what these might need to be in the coming decades as we enter yet another era of reform” (2011: p.312). That was one indication of deficiencies now identified in APS capabilities: developing outcome-focussed strategies and managing program performance (APSC, 2013e). Negative outcomes from past reforms have been losses in institutional capacity and long-term planning, plus an inability to deliver policy reforms (Tiernan, 2015b). Despite reviewing forty years of APS reforms, Tiernan made no reference in her conclusions to evaluation theory or the earlier MfR practice. A further identification of staff skills in public administration suggested seven: counselling; stewardship; practical wisdom; probity; judgement; diplomacy; political nous (Rhodes, 2016: pp.5-7). They did not include the ‘plan for results’ of Figure 4, illustrating organisational amnesia and the cyclical nature of APS management reforms (Barrett, 2014; Wettenhall, 2011). This cycle follows.

2.8 Past Management Reforms: Coming Around Again

Beginning with MfR (1980s-1996), this reform series returned in the similar non-financial performance requirements of the PGPA Act 2013. The MfR reforms were evaluated in 1992, resulting in the claim that formal program evaluation was embedded in all agencies (TFMI, 1992). However, this was contradicted by the contemporary “urgent need to press home the changes [of the MfR reforms] and embed them more firmly in the working culture of the Public Service” (MAB, 1993: p.v). This lack of embedding can be seen in a further APS reform of 2010: “Ahead of the Game” (AGRAGA, 2010). After assessing that the earlier MfR reform had lost its momentum, AGRAGA identified the need for the APS to deliver “effective programs and services [by] measuring performance” (Lindquist, 2010: p.117). The evidence for this need and for the claimed lack of APS ability to deliver programs effectively was not demonstrated.

The management reforms of 1983-2013 display repeated objectives. They began with ‘meeting government objectives’ (RCAGA, 1976) and ‘measures of performance’ (Coombs, 1976: p.36), moved to ‘manage for results’ (Keating, 1990) and ‘more focus on outcomes’ (1996 Commission of Audit). This ‘results-focussed’ theme is also evident in the formation of the Cabinet Implementation Unit in 2003 (Wanna, 2006), in evaluating program results (Tanner, 2008), in the requirement that the APS be outcomes-based (AGRAGA, 2010) and finally in the current PGPA Act, which emphasises assessing the performance of agencies in achieving their purpose (Finance, 2017a). The connection between that ‘performance’ and evaluation has been expanded in Finance’s Resource Management Guide 131 and this extract: “comprehensive evaluations are the best (and sometimes only) way to assess the performance of complex activities – especially those that have a large number of interdependent elements delivered by

multiple entities” (Finance, 2017f: p.51). The following Figure 5 sets out these perceived repeated links and their continuity.

Figure 5. 30 Years of Similarities in Management Reform Timelines.

Source: Researcher assessments of the successive reforms and their common elements.

Such repetition and links have not previously been researched and illustrate the importance of the long-term perspective of APS reform taken in this thesis.

Research into the reasons for the existence of those APS reform series has largely been absent. There was indirect reference to earlier reforms not being embedded in the conclusion that the pre-2010 reforms “had not been grounded into the repertoires of the APS” (Lindquist, 2010: p.125; Prasser, 2004; Thomas, 2006). No supporting research was provided for this finding and (by contrast) Lindquist concluded that “APS’ responsiveness and pragmatism will increase the chances of success over the longer term” (Lindquist, 2010: p.144). Alternatively, O’Flynn (2011) asserted the ‘Ahead of the Game’ reform (essentially that the APS must be outcomes-based: Figure 5) could not be implemented because of long-term APS staff de-skilling, which was particularly evident in implementation capacity (Althaus, 2011). Neither referred to the earlier demise of the complement to implementation: assessing effectiveness with evaluation skills (Mackay, 2011). Such evaluation skills are now recognised as being needed in implementing the current Public Governance, Performance and Accountability (PGPA) Act 2013 (Morton & Cook, 2018). Although the Act is principles-based (Barrett, 2014), Section 38 requires that “(1) The accountable authority of a Commonwealth entity must measure and assess the performance of the entity in achieving its purposes” (PGPA, 2018). Two senior officials of the Finance Department (Morton & Cook, 2018) now recognise that program evaluation can contribute to this measurement and assessment. In the current APS review, explicit links have been made with the 1976 Royal Commission, as providing lessons from the past (Australian Government, 2018b; Turnbull, 2018). Such repetition highlights the continuity of management reforms and the difficulties of embedding the management reform of ‘managing for results’ and its principal element of program evaluation in the APS.

This thesis analyses those difficulties. It examines the two significant responsibilities of APS Secretaries: initiating reform, and the implementation of those reforms by staff across the devolved APS. Three factors contributing to effective implementation appear to be missing: the ability of a Secretary to effect those changes throughout an agency; the competencies of subordinate staff to implement those changes; and the long-term commitment consistent with achieving objectives that embed systemic management reform. The current PGPA Act 2013 is a different strategy for implementing systemic APS reform. As national legislation, the Act adopts a different methodology from the earlier implementation of the MfR reform. That 1980s reform was implemented by administrative fiat to Secretaries and

accompanied by administrative guidance issued to staff. This compares with the national requirements of the PGPA Act now mandatory for all APS agencies.

The PGPA Act mandates national performance standards in APS agencies. It “consolidates the governance, performance and accountability requirements of the Commonwealth into a single piece of legislation, setting out a framework for regulating resource management by Commonwealth entities” (Finance, 2017a: p.3). Demonstrating these national requirements has presented on-going design challenges to APS policy and program implementation. Previous soft reform processes have included Parliamentary Committee Reports, ANAO Best Practice Guidelines and advisory recommendations from the centre (such as the Finance Department and the Australian Public Service Commissioner). Although containing similar features, priorities and implementation challenges to the earlier MfR reform (Figure 5), the PGPA Act has been regarded as new (Barrett, 2014). As with the ‘managing for results’ reform, heads of agencies are again required to “measure and assess the performance of the entity in achieving its purposes” (Finance, 2017a). The Finance Department has issued central guidance on measuring and recording non-financial APS program performance (Finance, 2017a, 2017g). The ANAO has, however, reported limited compliance in three agencies (ANAO, 2017). Recently, a more comprehensive review of the implementation of the PGPA Act across the APS concluded that the “quality of performance reporting needs to improve” (Alexander & Thodey, 2018: p.3). There have now been repeated calls over thirty years to improve the definition of APS performance and the meeting of government objectives, as revealed in the thirty-year APS management reform series of Figure 5. The recurring similarities were the need for regular reports on agency performance information, for developing APS evaluation skills and for using evaluation results. By their repetition, they reveal gaps in APS practice.

A gap in APS practice is evident between 1992 and 2013. The evaluation of MfR and its component of program evaluation concluded that the APS was capable of effective program performance and that this had become part of APS culture (TFMI, 1992). This gap has now been revealed by the legal requirements to demonstrate non-financial performance mandated for the APS in the PGPA Act 2013, where the practice of evaluation is again being emphasised (Morton & Cook, 2018). This chapter adapted part of Pettigrew’s research framework on context and extended time to examine the implementation of management reform in the APS over an extended timeframe of thirty years. By linking the commonalities of those reform objectives and actual practice, this chapter has raised concerns that they did

not become embedded and highlighted that there have been repeated management reforms of the APS. Some of those implementation challenges follow.

2.9 Implementing Management Reform of the Single APS

The APS poses challenges for the uniform implementation of the changes of national reform. As a single entity, it consists of 150,594 staff (APSC, 2018a). Some sixty-two per cent of APS staff are located outside Canberra (Figure 2) and their geographic dispersion has been an acknowledged factor in the failure to achieve uniformity in reform change, especially in the MfR (Matheson, 2016; TFMI, 1992). Initial reform terminology lacked a common understanding by staff of the meaning of a ‘result’ and there was no consistent application of these reforms in the active management of national programs (SCFPA, 1990). The means of reaching all APS staff and ensuring their uniform take-up of reforms have been under-researched (Beer, 2009; Mathieson, 2017). Although there is ‘an APS’ (in the singular), in practice it consists of eighteen departments of state managed by eighteen Secretaries, each responsible for their agency and driving their own reforms. Such devolved authority challenges any effective impact by the APS Head, the Secretary of the Prime Minister’s Department.

That authority of the APS Head stems from being the Chair of the Secretaries Board, which is responsible for achieving outcomes across the APS (APS Review, 2019). An unexplored implementation factor has been how that APS Head can bring about consistent and uniform management reform across the APS and its eighteen departments. As highlighted in Figure 1, these successive reforms in relatively short time periods also challenge the management attention and time that can be given to implementing any individual reform. This further questions how reform momentum can be maintained over the extended time required to embed the changes of any one reform. While it has been considered desirable to embed management reform in permanent APS practice, no framework has been developed to accomplish this.

No APS performance reform has become embedded in APS practice. This was a belated realisation by the Finance Department, as background to the introduction of the PGPA Act (Finance, 2014). The repetition of such performance reforms (Figure 5) during 1984-2014 calls into question the implementation frameworks used in those years and their effectiveness in making reform change permanent. The single evaluation of implementing a management reform (TFMI, 1992) suggests that APS management reforms are more frequently initiated than achieved. Sometimes labelled a reform syndrome, repeated reforms also risk upsetting

organisation stability and administrative continuity (Hood & Lodge, 2007; Wettenhall, 2013). A further attempt in 2003 at central implementation influence through the Cabinet Implementation Unit of PM&C was not mandated, as its requirements were advisory only (Wanna, 2006). In practice, central priorities of timing and implementation had been attempted through a Best Practice Guideline: “Implementation of Programme and Policy Initiatives: Making Implementation Matter”. Illustrating the renewed desired impact of the APS centre on reform and implementation, that Guideline advised (but did not mandate) “assessing the quality of the initiative post-implementation” (DPM&C/ANAO, 2006: p.53). That Guideline was abolished in 2018 (ANAO, 2018). This demonstrates a current gap in implementation frameworks for the single APS and calls into question their past effectiveness.

There is no evidence18 of any knowledge transfer of research on implementation, or effective take-up of that Best Practice Guideline by the APS in the early 2000s. Earlier, the then Auditor-General had exhorted agencies to evaluate and monitor continually, because they had not given “sufficient attention to planning for implementation” (McPhee, 2007: p.xvii). A further urging from the centre, by the Australian Public Service Commissioner, recognised the connections needed between program management and change in order to be effective in achieving government objectives (Briggs, 2007a). Both calls from the APS centre lacked any implementation follow-up and evaluation.

The lack of APS staff with appropriate implementation and delivery skills is being recognised (Althaus, 2011; Tiernan, 2015b). This concern has been acknowledged by the APS centre: “What is needed is a measurement framework capable of operating in all government agencies and capable of simplifying and streamlining the measurement and evaluation process” (Finance, 2013: p.34). Support for those concerns has come from a separate examination of Government initiatives, which linked failures in their implementation with poor design of those policies and reduced APS capabilities (Banks, 2014; Shergold, 2015). These assessments illustrate the lack of impact on APS practice of research into theories of implementation, or effective program delivery, or the place of long-term change management. Some of the attempts to make the changes of management reform stick are discussed next.

18 A search of Google Scholar for “Australian Public Service Best Practice Guidelines 2003-2018” on 5/6/19 revealed only multiple references to Best Practice Guidelines for private sector health professionals.

2.10 Making APS Management Reform Stick

Management reform in the APS has been attempted, but it has not stuck. A series of management reforms was initiated formally (Figure 5) and ANAO Best Practice Guides were an advisory means used by the centre (ANAO, 2017a). These Guides have now been withdrawn (ANAO, 2018). The central impact of the ANAO can no longer be assured, as APS agencies have separately failed to implement formal recommendations from the ANAO. For example, the ANAO found that “apart from DEEWR, none of the agencies included in this audit had developed structured implementation approaches in relation to implementation of ANAO recommendations” (ANAO, 2013a: p.13). This decline in APS implementation skills was a significant example of the centre’s lack of influence on the APS, despite its former central guidance on administrative practice. The ANAO had formerly exercised this central influence in two Better Practice Guides, which book-ended each other in their guidance to the APS that re-introduced program evaluation.

Evaluation had been linked with demonstrating APS performance. In their management of implementation through “Public Sector Governance: Strengthening Performance through Good Governance”, agencies were advised to “plan for high performance, and facilitate evaluation and review” (ANAO, 2014: p.21). Issued jointly with the Prime Minister’s Department, the second ANAO Guide had emphasised that implementation required demonstrable outcomes from the “Successful Implementation of Policy Initiatives” and was not a set-and-forget commencement (ANAO/DPM&C, 2014). This had been re-emphasised by the Head of the APS, although only through a one-off speech at an Implementation Conference (Parkinson, 2016). Achieving policy and performance success had thus formerly been an APS implementation priority driven from the centre. The abolition of the Cabinet Implementation Unit in 2015 (Gold, 2017) and the withdrawal of the ANAO Guides in 2018 have weakened the influence of the centre and were further examples of management reforms that did not stick.

This absence of sticking means that a key focus of this thesis (‘embed’) is missing from current APS practice. That ANAO Guide on the successful implementation of policy contained an unacknowledged focus on reform momentum and extended time. Those elements from Pettigrew could be seen in the former central insistence that “policies and programs, when implemented, require active management to be successful, and this involves measurement, analysis, consideration of feedback and complaints, evaluation and review, calibration and adjustment” (ANAO/DPM&C, 2014: p.i). Such management can be seen to

require a range of staff and skills. The need for and former use of these Guidelines from the centre raises questions about the impact on the APS of research into implementation and evaluation, relating to reform success.

There is an absence of evidence of APS management reform success. In relation to performance frameworks, the Finance Department concluded “there is scope for improvement at a whole-of-government level” (Finance, 2014: p.1). Reasons for this may have included that the planned devolution of management autonomy to agency heads diminished the application by central agencies of uniform standards of APS-wide performance management (Holmes & Shand, 1995; Podger, 2018). It was also possible that twenty years of reforms had de-skilled staff into being managers of contracts with non-government organisations, rather than managers of programs meeting government objectives (Hughes, 2012; O’Flynn, 2011). This chapter has identified lengthy practice-based gaps in implementation frameworks and in perceived management reform success.

These gaps are consistent with turns in the management reform series, exemplified in Figure 5. Such turns can disrupt the momentum of change in implementing a particular reform. Where one reform is overtaken by later ones, the momentum and extended time needed to implement it can be displaced by those later reform priorities (Ilott et al., 2016; Lindquist, 2010; Pettigrew, 1990; Sabatier, 1986). This results in the reform cycle potentially becoming a series of initiatives without end (Pollitt & Bouckaert, 2011; Prasser, 2004). A performance management system that stores information about long-term outcomes and serves as on-going APS corporate memory, independent of individual managers, would result in reform outcomes being embedded in the organisation (Newcomer & Caudle, 2011; Tingle, 2015; Wettenhall, 2011). Lacking this on-going information system, reforms are potentially “a destination never reached” (Thomas, 2006: p.15). Such management reforms contain an implicit focus on permanent outcomes, as they are “deliberate changes in the structures and processes of public sector organisations with the objective of getting them (in some sense) to run better” (Pollitt & Bouckaert, 2011: p.2). With the emphasis on implementing the changes of reform, the next section briefly summarises how change has been managed in the APS.

2.11 Public Sector Change Management in the APS

As a specific field, change management in the public sector has not been extensively researched. Initial research concentrated on identifying the structural changes to the APS, rather than the management of their implementation (Stewart & Kimber, 1996). Public sector leaders were otherwise expected to act as change agents and achieve lasting change, although those changes from public sector reform have proven difficult to evaluate (Boston, 2000; Dixon et al., 1998). One exception identified that successful implementation also required separate change agents, “employed to communicate and facilitate change” (Stewart & Kringas, 2003: p.679). This identified a unique agent of influence.

By contrast, APS leadership has been considered the responsibility of all staff (Althaus & Wanna, 2008). Less attention had been paid to the change management skills needed to implement reforms and ensure they achieve their purposes by becoming embedded in long-term practice across all levels of APS staff (MAB, 1993; MAC, 2010). This has left unresearched the reliance on the presence and personal styles of senior leaders with those change management skills. When those senior change agents move on, what happens to maintaining reform momentum and embedding change?

Those senior leaders have typically led the top-down changes of APS management reform. Examples included MfR (Keating, 1990), staff competencies (Ives, 1992), program evaluation in agencies (Mackay, 1994) and “Ahead of the Game” (AGRAGA, 2010). The personal characteristics of those senior implementing managers have ranged from enthusiasm and commitment to indifference (Matthews et al., 2011). Change management skills are generally lacking in the current APS. In reviewing the APS in 2013-14, the APSC observed the continuing lack of change management skills: “less than one-quarter of APS agencies reported their change management capability was at the desired level. Indeed, of the eight capabilities assessed, change management was rated the second lowest. Change management was also one of two capabilities assessed using this method in both 2011 and 2013, where agencies reported that little or no improvement had been made” (APSC, 2014c: pp.87-88). It is notable that ‘manage change’ had been one of the original core competencies for all levels of APS staff (Ives, 1992). This had not endured twenty years later. These absences in staff skills were further indications of the significance of extended time in implementing management reform and evaluating its permanence, or the associated resistance to the reforms.

Management reform can induce resistance to change among many staff. The reasons for resistance include the change being regarded as coercive, where staff lack input into management decisions, or being undertaken too quickly, where staff do not control the change

and jobs are at risk (Andrews et al., 2008; Ryan et al., 2008). This resistance was acknowledged early in APS reforms, with good communication being required of leaders of top-down change (Ryan et al., 2008; Stewart & Kringas, 2003; Wilenski, 1986). This led to the importance of establishing organisational readiness for change (Cinite et al., 2009), since effective change management requires both leadership and the active involvement of all staff, as “individual resistance occurs not as an automatic reaction to change, but as a response to poor change management” (Buick et al., 2015: p.274). It is important that change proceeds beyond its initiation, to its comprehensive management throughout all public sector staff (By et al., 2016). The dispersed locations of APS staff mean they do not necessarily share the central office viewpoint about the benefits of change, which was a lesson from the MfR evaluation that is relevant to the implementation of the current PGPA Act. The place of effective change management in implementing public sector reform is the subject of this study. This chapter has highlighted the assumption that management reform can be implemented top-down throughout the APS, by announcement from APS Secretaries. Although it may be announced from the top, reform may not result in enduring change.

2.12 Conclusions

This chapter has provided background to the title of this thesis: ‘The Secretary Said: Make it So. Can Change Management Theory Explain the Challenge of Achieving Enduring Public Sector Reform?’ The chapter examined the typically top-down reform process in the APS, provided examples of the virtually continuous management reform series and described how implementation has been managed and mismanaged. It has summarised the varying influences of the APS centre, such as the Prime Minister’s and Finance Departments, established the declining impact of the ANAO and considered whether ‘embed’ was an outcome of the various performance-related reforms. In the thirty years of continuity in APS management reforms (Figure 5), continued calls for the effectiveness of the APS to be demonstrated were identified. They suggest that those various management initiatives did not endure.

Whether the changes of public sector management reform endure is the focus of this case study. The main aim is to examine whether lessons from implementing APS management reform can contribute to enhancing an implementation framework and achieving embedded public sector change. The lessons from this chapter on APS practice cover the reform series, the dangers of asserting implementation too early, the consequent need to include extended time in implementing reform changes, plus the importance of later evaluating reform outcomes and

their effectiveness, when considered over several decades. These lessons highlight the importance of taking a cross-disciplinary approach to evaluating implementation theory. Such an approach is illustrated in the repetitions of the management reform series.

That cycle can lead to insufficient management attention over time to embed a single reform. This may occur through reform fatigue, where the repetitiveness of APS reforms becomes a syndrome and an industry (Hood & Lodge, 2007; Radin, 2000; Wettenhall, 2013). Research has mainly focussed on the commencement of reforms (Halligan, 2005; Halligan, 2007; Wanna, 2006; Wanna (Ed.), 2007; Lindquist, 2010; Lindquist et al. (Eds), 2011). Practice has now revealed that past management reforms have failed to become embedded in APS practice (Finance, 2014). This conclusion was not supported by evidence and raises questions examined in this study, relating both to the challenges the APS faces in demonstrating high performance in practice and to extending implementation theory through a cross-disciplinary framework drawing upon change management, organisation research and evaluation theory.

Implementation theory may be extended by examining past APS practice. There has been renewed interest in creating high-performing APS agencies and in extending implementation theory to include evaluation of such reform outcomes (Blackman et al., 2013; Buick et al., 2015; Craft & Halligan, 2016; Hood & Dixon, 2015; Pollitt, 2013). Literature on evaluating the implementation of competing models of public sector management has been modest (Laughlin & Broadbent, 1996; Boston, 2000; Jones & Kettl, 2003; Pollitt & Bouckaert, 2011). Recently, the place of evaluation in measuring APS non-financial performance has been acknowledged by the Finance Department (Morton & Cook, 2018). This has contributed to the further reform of APS non-financial performance under development since 2013 and illustrates the beginning of a cross-disciplinary approach to reform implementation.

The analysis of this chapter about APS reform practice drew on three fields. By adapting extended time from Pettigrew (1990) and using evaluation theory, a perspective of several decades was used to examine the implementation of APS management reforms. The study also examines whether a change management framework can help explain the challenges of implementing management reforms that endure over extended time. Lending weight to this examination is the factor of researching comprehensive change through to its eventual outcome(s), to “confront the challenge of analysing the relationship between the content and process of change and such organizational outcomes as performance” (Fernandez & Rainey, 2006: p.173). The

persistence of reform is undergoing re-framing as change management (Shannon, 2017), indicating a shift that may enhance current implementation theory. This suggests a cross-disciplinary approach may be fruitful in conducting this research.

This chapter found there have been ongoing management reforms of the APS over thirty years. The reform of program evaluation failed to achieve its long-term goal of embedding an APS program performance culture. There were patterns of management reform being halted, associated with changes at the top in either Governments or APS Secretaries. This led to decreasing management priorities in maintaining reform momentum, the lack of any structured systems to embed those management reforms and the failure to evaluate the achievements of those reforms. These findings were derived from the fields of public sector implementation, organisation research, change management and evaluation. Those frameworks were considered useful to assist in providing insights from cross-disciplinary research on reform, change management, the factor of extended time and the evaluation of enduring reform impact.

The next chapter reviews current theories in the fields of implementation, change management and evaluation, to establish overlapping elements relevant to the newly-emerging concerns with making reform stick. It highlights the current relevance of research in those disciplines and the current questioning of these research directions, together with mutual learning that might occur for both researchers and practitioners.


Chapter 3 Origins of the Theories: Implementation and Public Sector Reform

The introduction of program evaluation was intended to ‘close the loop’ by completing the ‘management cycle’ from planning to budgeting, implementation and monitoring and, finally, evaluation. (Wanna et al., 2000: p.212).

3.1 Introduction

This chapter integrates four fields of research to address the first question of this thesis: whether the management reform of program evaluation became embedded in the APS. It critically reviews current implementation theory and how APS management reforms have been implemented. It starts with reform defined as “deliberate change to the structures and processes of public sector organisations with the objective of getting them (in some sense) to run better” (Pollitt & Bouckaert, 2011: p.2). Implementation theory will be placed within the five stages of the public policy cycle: “agenda setting; policy formulation; decision-making; policy implementation; policy evaluation” (Howlett & Cashore, 2014: p.23). Implementing reform of the large APS involves interactions between initiation, change in administrative practices and evaluating the impacts of reform. Transforming such a system means re-thinking management reform “as a complex multi-level, dynamic process that requires change at the macro, meso and micro levels of public sector systems” (O’Flynn, 2015: p.19). Some re-assessment of reform implementation has involved asking whether it sticks and becomes embedded (Hood & Dixon, 2015; Ilott et al., 2016; Lindquist & Wanna, 2011). By drawing upon four fields of research, namely implementation, change management, organisation research and evaluation, this review further re-assesses the factors in successful public sector management reform.

The logic of this chapter seeks to understand what success would look like in implementing management reform. That logic stems from this conclusion by the Finance Department: “Since the 1980s, the Commonwealth has attempted a number of reforms to improve the reliability and scope of information on the performance of the Commonwealth public sector with mixed results. While the quality of financial information has improved significantly, … the quality of non-financial performance information has not improved to the same extent. No reforms have yet succeeded in embedding a performance focus into the workings of the Commonwealth public sector as a whole” (Finance, 2014: p.1). No evidence was given for this conclusion, which also introduced the factor of ‘embedding’, or in this case its absence.


There is emerging research interest in this absence of embedding of management reforms in the public sector. Drawing on the repetition of these reforms, that research is beginning to query the end-state of implementation: management reforms that become permanent, or stick (Hood & Dixon, 2015; Ilott et al., 2016; Lindquist & Wanna, 2011; Pal & Clark, 2015). In examining this absence of sticking, a re-assessment of implementation theory is underway, re-framing implementation as managing the changes of reform to endure over the long term (Shannon, 2017). By drawing upon the four fields of research identified above, this chapter also considers whether change management theory could guide enduring reform in the public sector.

This review examines the challenges of implementing enduring management reforms. It draws upon past APS practice in implementing the 1980s ‘Managing for Results’ reform (Keating, 1990) and its key component of program evaluation (Mackay, 1994). By reviewing implementation theory in the twin contexts of the public sector reform series (Barrett, 2014; Hood & Lodge, 2007) and the five stages of the public policy cycle, this chapter considers the place of extended time in effective and enduring reform. This chapter first examines those current theories of implementation, change management, organisation research and evaluation, then reviews current meanings of ‘success’. It concludes by identifying gaps in current implementation theory and then summarising the insights from this review.

3.2 Initial Implementation Theory

Implementation theory developed initially from a top-down framework. Its lengthy title of “Implementation. How Great Expectations in Washington are Dashed in Oakland. Or, Why It’s Amazing that Federal Programs Work at All” (Pressman & Wildavsky, 1973) was an early demonstration of implementation research in a context of failure. Pressman & Wildavsky established that policies and reforms are not necessarily implemented as originally planned from the top. This distinguished reform intentions (as intended from the government centre in Washington) from the actual (but different) outcomes in distant Oakland, California. This reflected a design flaw of not specifying those initial, intended outcomes. This flaw was continued in contemporaneous top-down implementation frameworks, as exemplified in Figure 6.

Figure 6. Source: Smith (1973: p.203).

There was no identification of outcomes, or impact, or assessing effectiveness.

Implementing such policy became located in a framework of effectiveness. By specifying the “achievement of objectives set forth in prior policy decisions” (Van Meter & Van Horn, 1975: p.447), this framework emphasised designing the intended links between policy decisions, their implementation and their outcomes, although qualified by the difficulty of measuring performance because of ambiguous objectives. This initial ambiguity was addressed in the six implementation factors identified by Sabatier & Mazmanian (1979): (1) sound objectives; (2) unambiguous policy directives; (3) leaders with managerial and political skill, who (4) remain committed to the statutory goals of the policy; (5) support by local but external groups; (6) reform priorities which do not erode over time. This framework incorporated two extra factors: top-level management priority and time (‘remain committed’; ‘not erode over time’). A gap remained between the emphases by Sabatier & Mazmanian (1979) on directives and goals (inputs), together with the initial management of implementation (leaders), and Van Meter & Van Horn’s end-state focus on achieving reform objectives.

This gap began to be bridged by bottom-up implementation theory, which introduced the concept of backward mapping. By starting with the intended policy effects “at the point of the problem” (Elmore, 1979: p.612), backward mapping began to bridge potential differences in implementation outcomes between the centre and its periphery. A factor distorting effective implementation was the discretion found to be exercised by public servants at that policy periphery of the street level (Lipsky, 1980). This was a geographic factor later found to underlie the evaluation of the ‘managing for results’ reform (TFMI, 1992). That evaluation discovered that APS staff in the dispersed regional offices did not have the same positive regard for reform success as their senior Canberra managers, who had designed that reform.


In enhancing these early reform theories, five implementation stages were suggested. These stages consisted of policy outputs, compliance with those outputs, actual impacts of outputs, perceived impacts of policy outputs, and finally revising policy (Sabatier & Mazmanian, 1980). This began to demonstrate two factors: the significance of designing linked implementation steps, and that establishing the actual results achieved had become part of that theory. Included in these success factors were two elements for judging the effectiveness of those results: linking objectives and planned outcomes through program design at the inception of reform; and feedback on the actual program impacts, through planned evaluation (Wolman, 1981). Those factors emphasised that implementation required both active monitoring of progress and feedback on the eventual outcomes. The title of Wolman’s 1981 research, “The Determinants of Program Success and Failure”, also drew out the search for frameworks of implementation success.

Awareness had already developed of a gap between reform intentions and any successful impacts. The finding that reforms from the centre failed to have their intended impacts on the periphery (Pressman & Wildavsky, 1973) was developed into competing theories of top-down (Van Meter & Van Horn, 1975) and bottom-up implementation (Lipsky, 1980). The difference in direction between these two theories illustrates the first theory-practice gap, between the actors and their planned impacts, which were connected by feed-back loops and suggested time-frames of twenty years (Sabatier, 1986). This established that reform should not be a set-and-forget top-down initiation of changes, but did not identify in any further detail how those changes would be managed so that they became permanent and embedded. The next section provides background to the second research question: what is the role of public sector change agents in embedding APS management reform?

3.3 Change Management Theory and Implementing Management Reform

Public sector management reform means changes in public sector practices. This is the next step in the definition of reform at the beginning of this chapter: “deliberate change to the structures and processes of public sector organisations …” (Pollitt & Bouckaert, 2011: p.2). Initially, that implementation process may be assisted by a top-level change agent (Scheirer, 1987), but too-frequent change disrupts the embedding of any single reform (Weller, 1993). The repetition of those frequent changes was an early identification of the reform series and the later dangers of reform fatigue (Holmes & Shand, 1995; Radin, 2000). Implementing the changes of public sector reform can result in outcomes that may or may not be effective, meaning “whether the programme outcomes achieve stated objectives” (Guthrie & English, 1996: p.156). A slightly more-extensive definition of the reform process associated with managing for results concludes that “results-oriented management is the purposeful use of resources and information in efforts to achieve and demonstrate measurable progress toward outcome-related agency and program goals” (Wholey, 2001: p.344). Both definitions lack the factors of time and of managing the necessary intervening changes.

Change theory identifies a new factor of ‘institutionalisation’ in implementing management reform. This means anchoring the reform changes in permanent practice within an agency (Kotter, 1995). Kotter’s eight steps also begin to fill in the challenges of changing public sector practice: establish a sense of urgency; form a powerful coalition; create a vision; communicate the vision; empower others to act on the vision; plan for and create short-term wins; consolidate the improvements and produce still more change; institutionalise the new approaches (Kotter, 1995). This began to separate the process of implementing change from those leading it, although the separate change management theory continued to emphasise that leadership was required to sustain top-down reform (Gill, 2002). There was scepticism about theories of successful implementation (O’Toole, 2004), and implementation research was considered to lack a comprehensive theory (Saetren, 2005). This conclusion contrasted with the earlier eight-step framework of Kotter (1995), derived from the separate discipline of change management in the private sector.

The derivation of Kotter’s framework from the private sector and its lack of an evidence base have drawn criticism (Pollack & Pollack, 2015). Kotter’s framework has, however, underpinned past public sector change management (Fernandez & Rainey, 2006; Stewart & Kringas, 2003) and continues to contribute to public sector change management and leadership research (Kuipers et al., 2014; Shannon, 2016; Van der Voet et al., 2016a). Its application in the APS has been variable: it was applied in the former Immigration Department (Metcalfe, 2007) and in welfare delivery in Centrelink (Halligan & Wills, 2008), but not in “empowering change: fostering innovation” (MAC, 2010), nor in anchoring APS policy reforms (Lindquist et al., Eds, 2011). Kotter’s model has been described as an instantaneous success and considered the leading model in change leadership (Applebaum et al., 2012; By et al., 2016). By contrast, it has been criticised as discouraging change by an “over emphasis on a sequence of linear steps” (Hughes, 2016: p.455) and as not encouraging formal evaluation of success. Although change in the APS has been described as “an ever-present constant” (APSC, 2014c: p.88) and its successful management frustrates some current APS leaders, such as the Australian Federal Police Commissioner (Colvin, 2019), no model of successful change management in the APS has been developed. This thesis contributes to that development.

Change theory has been developed by including the management of successful change in the public sector. Fernandez & Rainey (2006) proposed seven success factors: ensure the need for change; plan; support change; have a change champion; build external support; provide resources; institutionalise change. Their framework did not locate that change champion at the top (e.g. an APS Secretary), but those factors identify a separate place for senior managers of the change that is to be institutionalised. Leadership was more than imposing top-down reform (Battilana et al., 2010; Rusaw, 2007). The addition of a change champion who could maintain the momentum of a reform (Morrison, 2014) linked the beginning of reform with the processes of change over time. This contributes background to the third research question: how can change management frameworks explain the challenges of implementing APS reform policies?

There has been limited research explaining how the management of change over extended periods of time contributes to effective implementation. Managing the changes of reform involves an additional factor of extended time (Pettigrew, 1990; Ployhart & Vandenberg, 2010; Saetren, 2014), although there has been no consistency in defining the period of extended time required for carrying out effective reform change. In practice, these periods have varied considerably: from three years (Hanwright & Makinson, 2008), five and ten (Kotter, 1995), twenty (Bovaird & Russell, 2007) to thirty years (Craft & Halligan, 2017; Hood & Dixon, 2015). During this extended time, implementation can result in unintended outcomes, necessitating on-going monitoring and active management of the changes involved for both the participants and the implementation processes (Ghobadian et al., 2009). Some progress was made in conceptualising successful change through the evaluation discipline (Funnell & Rogers, 2011; Hunter & Neilsen, 2013), but theory-practice gaps have recently been identified as remaining in the change-management literature (Kuipers et al., 2014). For example, the APS’s Cabinet Implementation Unit (Wanna, 2006) was one means in practice of top-down monitoring of reform over time, but the unit disappeared in 2015 (Gold, 2017). This disappearance of influence by the APS centre (in the Prime Minister’s Department) has gone unnoticed in research (e.g. Wanna, 2018) and calls into question the whole-of-APS context for embedding the changes of management reform across the dispersed and devolved APS.


Implementing public sector change involves the participation of lower-level supervisors. A form of transformational leadership is also required from these middle managers, to include all staff successfully in the change process (van der Voet, 2016; Van der Voet et al., 2016a). This introduced two new factors in successful reform implementation: the personal characteristics of the direct supervisors below the change leaders, and the greater commitment to change that those supervisors foster among change recipients. These factors suggest that individual change agents in charge of top-down reform (such as APS Secretaries) may not be sufficient for achieving effective reform in the public sector. They also highlight the significance of reaching all staff in national agencies located in dispersed geographic regions and the time required for undertaking such extensive change management. By comparison with the preceding change management over time, early implementation frameworks were incomplete.

This incompleteness related to extended time and effectiveness. One implementation framework contained no factors of time or evaluation (Van Meter & Van Horn, 1975). The requirements for effective implementation (Sabatier & Mazmanian, 1979) were external ones for maintaining the change priorities and did not include the factor of extended implementation time, which was also absent from backward mapping (Elmore, 1980). A subsequent focus on reform success/failure evaluated whether implementation achieved its objectives (Wolman, 1981), although this lacked the factor of examining whether implementation continued long enough to have a permanent impact. This began to identify a gap between the significance of ‘achieving objectives’ (possibly in the short term) and their ‘effectiveness’ (impact in the long term). Although change management theory was initially derived from the private sector (Armenakis & Bedeian, 1999; Kotter, 1995), it later included change leadership in the public sector (Gill, 2002; Karp & Helgo, 2008). This identified the influence of those direct supervisors below agency heads and senior managers (van der Voet, 2014), as those supervisors need to overcome resistance by general employees to the reform changes (van der Voet et al., 2016a). This began to outline the importance of comprehensive change involving all agency staff.

However, evaluating the success of change management has been largely lacking in the Australian public sector. A comprehensive framework of successful public sector reform change has remained under-developed, as “the normative issue of what precisely a success or failure constitutes is not often discussed” (Kuipers et al., 2014: p.14). This also established subtle tensions in implementation frameworks: between declaring a reform achieved and demonstrating the sustainability of that reform over extended time, or this review’s ‘embedding’. These gaps identify complementarities between implementation theory and change management that is both extended over time (in taking time to affect all staff) and effective. Their relevance to organisation theory follows.

3.4 Organisation Theory and Extended Time

Organisation theory can contribute to implementation theory through the extra factor of time. An element of extended time was introduced in the management of change, by evaluating the later results of reform (Wolman, 1981; Pettigrew, 1990), although ‘time’ in implementation theory had not been included in an attempted synthesis of top-down and bottom-up theories (Sabatier, 1986). As derived from organisation research, the factor of extended time is a key feature of managing change, by connecting the “content, contexts, and processes of change over time to explain the differential achievement of change objectives” (Pettigrew, 1990: p.268). This introduces time as an important link between initiating management reform changes and achieving their objectives, while enhancing implementation theory with more-active management of those reform changes over extended time.

Extended time is an implementation factor. Unidentified periods have been required for implementation (O’Toole, 2000), contrasting with a minimum of fifteen years for evaluating public sector change (Boston, 2000). Although implementation was considered to be complex (O’Toole, 2004), that complexity was examined in the public administration field and did not incorporate separate change management theory. Pettigrew’s framework examines the complexity of change over time, links the eventual outcomes relative to those (reform) objectives and bridges theoretical gaps between initiating reform, the complex processes of its implementation and measuring embedded outcomes. It links the start to finish of reform change.

This thesis examines links over extended time in embedding reform change. This means “the key points to emphasize in analysing change are firstly the importance of embeddedness, studying change in the context of interconnected levels of analysis. Secondly, the importance of temporal interconnectedness, locating change in past, present, and future time” (Pettigrew, 1990: p.269). This thesis made a key adaptation of Pettigrew’s extended time: it examined the length of a management reform’s assessed impact against the extended time required for embedding such impact. In this research, the case boundaries include the period of twelve years between the introduction of MfR’s program evaluation in 1984 (Mackay, 1993) and its ceasing to be mandatory in 1996 (Mackay, 2011). In analysing the uses of ‘embed’, this defined period of twelve years enables various practice-based and research perspectives to be examined for their relevance to models of change and performance.

Indirectly, change management frameworks began to influence performance management change. For example, Pollitt (2009a) drew on Fernandez & Rainey (2006), who drew on Kotter (1995). However, the demonstrable impacts of management change continued to be elusive (Pollitt, 2013). Alternatively, implementing policy and management reform was located in a life-cycle evaluation framework, which included a desirable time horizon longer than three years (Scheirer, 2012). An attempted connection of performance management, performance measurement and program evaluation did not include managing the associated change (Hatry, 2013). The managers of those changes could be different from the initiators of a reform (Aberbach & Christensen, 2014). This begins to identify a practical gap between the change leaders at the top and the subsequent change implementers that could disrupt systemic change.

A higher-level disruptor of that public sector change could be identified. This was the absence of long-term political attention spans by governments or Ministers, affecting reform momentum (Pollitt, 2013). The conclusion by O’Toole & Meier (2014), that a more general theory of the management-performance link continues to be lacking, omitted the extra factors of extended time and evaluation already reviewed here. Evaluating large-scale public sector reforms has been rare (Breidahl et al., 2017), highlighting the current separation of theories and research frameworks in implementation, change management, extended time and assessments of reform effectiveness. Their relevance to evaluation theory follows.

3.5 Evaluation Theory

This section provides background on evaluation theory. Complexity in implementation is also apparent in evaluation theory, where implementation is regarded as a difficult process of change (DeGroff & Cargo, 2009). Such complexity contributes to the fourth question of this thesis: what insights might be learnt from cross-disciplinary research integrating reform, change management, the factor of extended time and the evaluation of enduring reform impact? Evaluation is a structured way of demonstrating the effectiveness of reform and its success (Bickman, 1987; Poland, 1974). Early evaluation research included designing a program/reform so that it could later be evaluated, which is termed ‘evaluability’ (Wholey, 1987). From that evaluation discipline, program logic is a planning and design feature which links the stages of implementation with the intended and realised outcomes (Bickman, 1987; Horst et al., 1974; Scheirer, 1987). Evaluation’s program logic bears similarities with the flow diagram of Sabatier & Mazmanian (1980) in implementation theory, as it sets out the variables and outcomes over implementation timeframes that are short-term, intermediate and long-term.

An example of evaluation’s program logic follows in Figure 7.

Figure 7: Program Logic. Source: McLaughlin & Jordan (1999: p.67).

This demonstrates the links over three time-frames between implementation and outcomes. Program logic has been utilised to connect implementation, performance monitoring and evaluating complex interventions (Baehler, 2007; DeGroff & Cargo, 2009; Funnell, 2000; Rogers, 2008). The links in the design of program logic show similarities with the framework of Pettigrew (1990). As an element in designing management reform, program logic links reform initiation, the stages over time of its implementation and the later identification of any successes in the long term. The timings of those short/intermediate/long-term factors remain undefined.

Program logic complements the question: how would long-term success in management reform be measured? This focus on success is frequently sought (Fernandez & Rainey, 2006; Funnell, 2000; Marsh & McConnell, 2010; Page & Ayres, 2018). Initially designing the logic of management reform ensures that it is evaluable, meaning it is capable of later being evaluated for its effectiveness and success (Hatry, 2013; Leithwood & Montgomery, 1980). Such formal evaluations are good management practice (Funnell & Rogers, 2011; Poland, 1974; Wholey, 2001). Using program logic also establishes the results that are to be initially expected over three different timeframes: short, medium and long-term. This connects evaluation theory with Pettigrew’s organisation research and its factor of extended time.

This extended time has been absent in the implementation of APS management reform. After ten years of implementation, there were early claims of success in the ‘managing for results’ reform, although suitable evaluation methodologies were not utilised to support those claims (Pollitt & Bouckaert, 2003; TFMI, 1992). The place of time has been recognised as a factor in evaluation: “a longitudinal element would seem to be a vital element in better evaluation” (Pollitt, 1995: p.151). Pollitt did not reference Pettigrew (1990). While this omission indicated one gap between disciplines, Pollitt’s observation implied a link between reform implementation, extended time and evaluation. From that evaluation field, there have been repeated calls to link policy implementation with evaluation (Baehler, 2007; Boston, 2000; DeGroff & Cargo, 2009; Laughlin & Broadbent, 1996; Rogers, 2008). Both program performance management and its measurement could be linked with program evaluation, as they are regarded as complementary to each other (Hatry, 2013; Ryan, 2004). The absence of this link continues to be a feature of implementation theory, as O’Flynn (2015) has concluded there are no strong models for understanding the outcomes of public sector reform. The development of these reform models and their relationship to success follows.

3.6 Success in Implementing Management Reform

Despite research over four decades, no overarching implementation framework has emerged. Generally, research on practice had been limited to United States public administration reforms (Elmore, 1979; Kotter, 1995; O’Toole, 2000; Sabatier & Mazmanian, 1979; Van Meter & Van Horn, 1975). There was some inclusion of reform outcomes, through feedback and evaluation, or feed-back loops over extended time-frames (Sabatier, 1986; Sabatier & Mazmanian, 1979). The outcome of ‘successful’ came to feature in the seven-part framework of “Managing Successful Organizational Change in the Public Sector” (Fernandez & Rainey, 2006), with the last stage of institutionalising (embedding change that endures) being a cross-over from Kotter’s last step of change in the private sector. No means were identified by Fernandez & Rainey (2006) for systemically connecting the initial objective with later reviewing its eventual realisation, nor for embedding the desired change(s) if they were successful.

The factor of success has been introduced into implementing public sector management reform. This was qualified by a factor of extended time. Demonstrating any successful impacts requires extended periods of twenty to thirty years, which potentially exceeds the attention spans of top-down political interest (Bovaird & Russell, 2007; Craft & Halligan, 2017; Pollitt & Bouckaert, 2011). Success did become a late implementation factor, examining whether the reforms as they were implemented succeeded (Aberbach & Christensen, 2014; Bovaird & Russell, 2007; Hawke, 2012; Hupe & Hill, 2016; O’Flynn, 2015). After that initial implementation, a factor in this framework of success has been the extended time-frames of decades.

Such extended time-frames connect implementation theory with organisation research. This connection over time between the processes of initiating discrete changes of reform and analysing their eventual outcomes had featured in Pettigrew’s longitudinal framework. Although not referencing the similar program logic of evaluation, that framework was a systematic link between “the content, contexts, and processes of change over time to explain the differential achievement of change objectives” (Pettigrew, 1990: p.268). Achieving the changes of reform objectives (Jones & Kettl, 2003; O’Toole, 2004; Saetren, 2005) required leadership of cultural change, which meant “a more systematic approach to cultural change than has been followed up to date” (Thomas, 2006: pp.67-68). Those top-down and bottom-up implementation theories were the first and second generations of research.

A third generation of implementation theory has not emerged from existing research, when limited to the field of implementation. There has been no framework of effective implementation, linking initiation to eventual outcomes (Hupe, 2014; Montjoy & O’Toole, 1979; Saetren, 2005). Notably, implementation research has not drawn upon evaluation’s program logic, which sets out a structured path of intended outcomes and testing the success of that implementation process over time (e.g. Bozeman & Bretschneider, 1986; Hunter & Neilsen, 2013; McLaughlin & Jordan, 1999; Newcomer & Caudle, 2011; Wholey, 1987, 2001). A framework of successful organisational change in the public sector (Fernandez & Rainey, 2006) contained undefined references to ‘success’ and omitted the evaluator’s meaning: that the intended outcomes were achieved.

Some omissions have been addressed. This occurred through leaders being long-term change agents possessing organisational change and evaluation skills (Battilana et al., 2010; Rusaw, 2007). That began to result in factors of effective reform: initiating change, steering through its implementation and examining reform success by evaluating the eventual outcomes. Those outcomes could also be demonstrated by embedded performance management, through a “Model Performance Management Framework” (Newcomer & Caudle, 2011: pp.125-126). These factors of leadership, change and success were further indications of the separate developments in research frameworks, which did not completely overlap. This separation was demonstrated in the following difficulties in evaluating successful reform.

Evaluating the outcomes of thirty years of public sector management reforms has been inconclusive. There were difficulties in defining and attributing the results from implementing reforms (Boyne, 2003). Although evaluation has been repeatedly attempted, actual evaluation of large-scale reform has been rare (Boston, 2000; Breidahl et al., 2017; Moynihan, 2006; O’Flynn, 2015; Pollitt, 1995; Pollitt & Bouckaert, 2011). By contrast, attributing the origins of successful reform outcomes through program logic (Figure 7) is a key concept in the evaluation discipline (e.g. Funnell, 2000; Hatry, 2013; Perrin, 2015). Further research has been proposed into the specific contexts of public sector management reforms, but no reference was made to Pettigrew’s framework (Pollitt & Bouckaert, 2011). Establishing the outcomes from implementing management reform is now of increasing interest (Lindquist & Wanna, 2011; Lindquist & Wanna, 2015; O’Flynn, 2015; Pollitt & Dan, 2013; Tiernan, 2015c). This includes the additional factor of whether such management reform becomes embedded and sticks (APSC, 2013f; Ilott et al., 2016; Newcomer & Caudle, 2011; Pal & Clark, 2015). By distinguishing between commencing a reform and its later outcomes, this research indicates a gap that this study addresses through the factors of later effectiveness over extended time. Whether a reform sticks is an emerging research interest.

3.7 Making Successful Reform Stick

An early attempt at assessing reform success sought to integrate implementation theory and evaluation theory. This was based on both measuring and understanding the outcomes from the processes and delivery of a program or reform (Scheirer, 1987), but this finding from the evaluation discipline had limited influence in the subsequent arena of implementation research.19 Evaluation’s program logic (Figure 7) was applied in other areas associated with program performance, the initial design of program implementation to ensure its later evaluability, or both performance and evaluability (Baehler, 2003; Bickman, 1987; McLaughlin & Jordan, 1999; Rogers, 2008; Scheirer, 2012; Wholey, 1987). Research elsewhere in public administration highlighted the place of key performance indicators in managing outcomes and demonstrating their effectiveness (Smith, 1995). Effectively managing such performance could be demonstrated through evaluation, with performance measurement becoming a key element of modern public sector reform (Finance, 2000; Johnsen, 2005; Wholey, 2001). This connected the initiation of a reform with its active management, through to demonstrating the eventual and demonstrable impact(s). As Pressman & Wildavsky (1973) demonstrated, eventual outcomes in practice may not necessarily correspond with the high-level intentions of reform.

19 Of the 135 citations of Scheirer on Google Scholar at 22/1/19, most were in the evaluation discipline. Examples of those journals included: Evaluation and Program Planning; Evaluation Review; American Journal of Evaluation; New Directions for Program Evaluation; Evaluation Practice.

Those eventual impacts realised might not be the intended ones of management reform. There was a risk of a gap between top-level political attention (not) being maintained and the eventual but slow administrative implementation (Mauri & Muccio, 2012). Designing for success in such implementation could draw on four factors from evaluation theory: initial program theory; program logic and evaluability; creating a culture of evaluation in a public sector agency; and planned evaluation (Bickman, 1987; Hanwright & Makinson, 2008; Scheirer, 2012; Wholey, 2001). These factors only appeared later in the linking of implementation, performance management and evaluation (Hatry, 2013). The take-up of evaluation’s program logic in the field of public policy has been slow (Baehler, 2007; Head & Alford, 2015; Isett et al., 2016). The relevance of program logic to the successful implementation of reform follows.

Making a reform stick means embedding it. That can be achieved through "policy paradigms, operational routines, control mechanisms, training manuals, hiring and firing and promotion practices" (Boin & Christensen, 2008: p.279). This framework was enhanced with ten proposed elements of "time; problem dimension; problem scale; losses from reform; policy design; institutional change; compensatory strategies; stickiness; analytical skills; political skills" (Pal & Clark, 2015: p.251). Making reform stick is an emerging factor in implementation research (Hood & Dixon, 2015; Ilott et al., 2016; Lindquist & Wanna, 2011). The elements of time, policy design, change and stickiness reflect research considered in this review, but that research has lacked the on-going monitoring and evaluation needed to test whether stickiness occurred and for how long. These developments identify gaps in current implementation research, which are reviewed next.

3.8 Current Gaps in Implementation Theory

This section explores gaps in current implementation theory. Such gaps are relevant to considering whether a change management framework could help to explain the challenges of implementing policies and reforms that endure over extended time. Early theories of implementing reform omitted the management of those reform changes through to their effective and enduring outcomes (e.g. Montjoy & O'Toole, 1979; Sabatier & Mazmanian, 1980; Saetren, 2005; Van Meter & Van Horn, 1975). Although the need for a comprehensive implementation theory was identified over twenty years ago (Matland, 1995), implementation research has become limited in its research base. For example, a potential third generation research paradigm was based mainly on core journals in public policy, political science, public administration and public management (Saetren, 2014). Research findings from the other disciplines examined in this chapter are yet to enhance implementation theories.

Seven examples of findings from those other disciplines follow. The seven factors are: active change management (Armenakis et al., 2000; Fernandez & Rainey, 2006; Kotter, 1995), especially leadership of change (Battilana et al., 2010); performance management connected with evaluation (Blalock, 1999; Hatry, 2013; Kroll & Moynihan, 2017); implementation linked with evaluation (DeGroff & Cargo, 2009); accountability of management through evaluation (Perrin, 2015); integrating performance measurement with evaluation (Newcomer & Brass, 2016); and the relationship between active performance management and organisational performance (Gerrish, 2016). Others argued that further implementation research was necessary to produce "parsimonious theoretical constructs" (Hupe & Saetren, 2015: p.100), as implementation theories were now considered fractured and anecdotal (Howlett, 2018). These conclusions highlight a lack of cross-disciplinary analysis in implementing successful public sector management reform.

The key elements required to implement public sector reform successfully can be identified. The last stage in Kotter's influential framework of eight positive transforming steps was "institutionalizing new approaches" (Kotter, 1995: p.61). Later development of Kotter's work identified that change also required good leadership (Gill, 2002; Stewart & Kringas, 2003). Kotter's last factor of institutionalising change has been adapted in a framework for the management of successful public sector reform (Fernandez & Rainey, 2006). While widely and recently cited20, Kotter's framework has been criticised as being derived from the private sector, and it has not generally been examined more closely for its application to public sector practice (Pollack & Pollack, 2015). Elsewhere, in evaluation theory, a wider conceptualisation of performance management developed: "an organisation's ability to achieve its goals and objectives measurably, reliably and sustainably through intentional actions" (Hunter & Neilsen, 2013: p.10). By defining the four factors of successful change (act; measure; achieve; sustain), this was a further development of the effective change framework. Through a cross-disciplinary approach to reform implementation, a possible third generation of implementation research is developing, to include extended time.

20 A Web of Science analysis on 18/7/19 showed most citations of Kotter (1995) are in the fields of Management and Business (a combined 58%), with a minority in the Public Administration discipline (3.6%). Two-thirds (66%) of all those citations have been since 2011.

The factor of extended time can be seen in the proposed connection between outcomes and time, where time is more noticeable for its absence: "causal analysis of policy processes becomes problematic when there is no time dimension in the data collected" (Hupe & Saetren, 2015: p.99). Such an enhanced implementation framework may close the gap between policy success and failure (McConnell, 2015). It is notable that the rarity of evaluating public sector reforms (Breidahl et al., 2017) was a conclusion from evaluation theory, not implementation theory. These gaps suggest there is incomplete maturity in research on implementing public sector management reform.

Such successful management reform involves changes over time. Time is required to tell the performance story (McLaughlin & Jordan, 1999) and establish whether the intended (reform) results are being achieved effectively (Weiss, 1999). Even fifteen years later may have been too soon to measure the effectiveness of public sector management reforms (Boston, 2000). This raised a timing benchmark for managing the changes of reform over the long-term and for when to assess success (Funnell, 2000). A wider context of reform success over the long-term can be seen in the separate theories of implementation, change management (Kotter, 1995), organisational change over extended time (Pettigrew, 1990) and evaluation (e.g. Rogers, 2008; Weiss, 1999; Wholey, 2001). Where they overlap, the frameworks of implementation, change management and evaluation may be relevant to the concerns with embedding reform (Newcomer & Caudle, 2011) or making it stick (Lindquist & Wanna, 2011; Pal & Clark, 2015; Ilott et al., 2016). This places extended time in the space between commencing the implementation of reform and evaluating the effectiveness of any later embedded outcomes.

Managing change that is effective over time may lead to reform being institutionalised. Kotter's framework suggests there are eight steps: establish urgency; create a powerful coalition; vision; communicate the vision; empower others; create short-term wins; continue change; institutionalise the new approaches (Kotter, 1995). As discussed in Section 3.2, Kotter's framework has continued to contribute to public sector change management and leadership research (Fernandez & Rainey, 2006; Kuipers et al., 2014; Shannon, 2016; Stewart & Kringas, 2003; Van der Voet et al., 2016a). This framework has remained influential21 and Kotter's last stage of institutionalisation has been adapted to the concept of embedding developed in this chapter. In this thesis, embed is used in the public sector context of made permanent, or implemented demonstrably for the long term (APSC, 2013f; Crawford et al., 2003; Halligan & Adams, 2004; Podger, 2004, 2018; Pollitt, 2013). Between change management and this embedding lies the factor of extended time. From organisation research, Pettigrew's (1990) analysis of organisational change over extended time connects reform changes over time with an evaluation of their achievements. By including a comparison between management reforms and their eventual impacts, Pettigrew's framework is consistent with the evaluation discipline in assessing those reform changes.

Evaluation theory has run parallel to, but unconnected with, implementation research. Relative to reform implementation, evaluation theory provides such insights as: designing evaluability; evaluating reform change; and using performance measures in (reform) performance monitoring (Boston, 2000; Funnell, 2000; Mayne, 2001; Pollitt, 1995; Wholey, 1987). Implementation frameworks have been more focussed on the processes of implementation, with the direction of implementation theory being regularly queried (Barrett, S., 2004; deLeon & deLeon, 2002; O'Toole, 2000). Evaluating management reform to gain evidence of its effectiveness could also avoid the danger of public sector reform fatigue, which can occur when successive reforms overtake the incomplete implementation of earlier reforms (Di Francesco, 2000; Radin, 2000; Wanna et al., 2000). This produces a never-ending series of reforms over time (Barrett, 2014; Hood & Lodge, 2007; Jones & Kettl, 2003). By contrast, planning to connect the initial implementation goal with its later realisation in implementation theory (Hill & Hupe, 2003) was already familiar from the evaluation discipline. Further insights are missing from other implementation frameworks.

Implementation has been framed as a simple dichotomy of input/outcome. That framework compared implementation between the original policy objectives and the later achievement of those intended objectives (Marsh & McConnell, 2010). This was subject to the caveat of extended time, as success required on-going support from political, non-government and community levels (McConnell, 2010). Further insights were missing from a review of thirty-five years of implementation research with two conclusions: success was affected by too many variables in the implementation chain (although those variables were not identified); and clear initial goals were needed (Hupe, 2011). Missing from Hupe's review were these five insights from other disciplines: the connections between public policy and evaluation (Weiss, 1999); evaluating implementation (Leithwood & Montgomery, 1980); the design feature of program logic (Funnell, 2000; McLaughlin & Jordan, 1999; Rogers, 2008); the timing and impacts of analysis over extended time (Pettigrew, 1990; Pollitt, 1995); and the design and implementation of successful organisational change (Fernandez & Rainey, 2006; Funnell & Rogers, 2011). By contrast, a focus on 'failure' has continued, where individual examples of failure could provide a better understanding of the implementation interactions between policy, politics, process and programmes (McConnell, 2015). Seeking the results of public sector reform through evaluation also involves managing change over the long-term: linking policy design, initiation, implementation, change over extended time and the evaluation of successful implementation. These findings are now presented for their relevance to current APS practice.

21 53% of the references to Kotter (1995) have been in the 6 years since 2013 (Web of Science extract, 18/7/19).

3.9 Relevance of Gaps in Theory to Current APS Practice

The practice of APS management reform has resulted in its own gaps. Early claims of success in implementing the 1980s APS reform of MfR (Ives, 1994; Keating, 1990; Sedgwick, 1994; TFMI, 1992; Yeatman, 1987) contrasted with three later reservations. The first was the difficulty of assigning managerial responsibility for those results (Stewart & Kimber, 1996). The second indicated management reforms may only be a rolling series of unfinished inputs (Prasser, 2004). The third reflects this research's interest in extended time (Pettigrew, 1990), in noting that the claims of embedding in APS practice were made only some five to ten years after initiating the reform of MfR (TFMI, 1992). The need for management reform has not been linked with any later evaluation of its success (e.g. Baehler, 2003; Barrett, P., 2004a; Guthrie et al., 2003; Johnston, 2000; Lindquist, 2010; Moran, 2013). This highlights a disconnect between reform initiation and any later, demonstrable success.

By contrast, claims of success in APS management reform are subject to conflicts of interest. Those claims are notable for being made by the APS promoters of these reforms in central agencies (Ives, 1994; Keating, 1990; Moran, 2013). This was exemplified in the evaluation of the MfR reform, which was conducted internally by APS officials and then reported to six of the same Secretaries on the Management Advisory Board whose earlier implementation of that reform was being evaluated (TFMI, 1992). Evaluations of reform outcomes have been recommended in practice (Dawkins, 1985) and in theory (Fernandez & Rainey, 2006), but became secondary afterthoughts in the implementation processes (Halligan, 2007). This suggests initiating a reform has been treated as more important than steering it to finality, as embedding a management reform can require extensive periods of time.

The extended time needed to assess reform success has not been defined. Demonstrating any eventual impacts can require ten to twenty years (Bovaird & Russell, 2007) or up to thirty (Craft & Halligan, 2017). Progress can also be continually assessed, with on-going performance information in management information systems (Bozeman & Bretschneider, 1986; Finance, 2000; Moynihan et al., 2012; Taylor, 2009). These periods of ten to thirty years complement Pettigrew's focus on organisational change over extended time and relate to evaluation's program logic, with its short, medium and long-term outcomes (Figure 7). Evaluation theory's program logic can bridge the gap between reform initiation and outcome (Baehler, 2007; Head & Alford, 2015; Ryan, 2004). Program logic establishes success factors in the initial design stage of an implementation framework that also includes the long-term. In 2003, some central APS influence was exerted on agencies' implementation of Cabinet decisions over time.

This was the creation of the Cabinet Implementation Unit in the Prime Minister's Department. It was suggested this would lead to improved policy implementation across the APS (Tiernan, 2007). At the same time, more active agency management by APS heads was expected to include ongoing monitoring and periodic evaluation of their programs (Briggs, 2007a). A shift in Australian research terminology could also be observed, from the process of "Improving Implementation" (Wanna, ed, 2007) to the focus on outcomes of "Anchoring Significant Reforms…" (Lindquist et al., 2011). Both the design and consolidation of reform should be of equal concern ('t Hart, 2011). This began to link the initiation of management reforms with evaluating whether they achieve their intended impacts in long-term practice. Absent in these developments were the agents of such evaluations and achievements.

Absent in APS practice has been the evaluation of later reform achievements. Two evaluations of the MfR reforms were conducted by the national Audit Office: program evaluation (ANAO, 1997) and annual performance reporting (ANAO, 2003). However, ANAO reviews are not necessarily implemented by agencies and may have no impact on changing APS practices (ANAO, 2013a; 2019). This identifies a current state of the APS: the lack of central impact on APS agencies and their accountability. Declines in APS staff capabilities have included the skills of "policy, … accountability, … [and] capacity for long-term thinking" (APSC, 2013f; Tiernan, 2015b: p.57). The reform series, and whether APS reform sticks, has become a research interest (Head & O'Flynn, 2015; Lindquist & Wanna, 2011, 2015; Rhodes, 2016). A single reform may be commenced, not finalised and then overtaken by a later reform.

Implementing a single reform over the long-term of decades, across the entire APS, remains a challenge. This is despite a current priority for all levels of APS management being to manage constant change: "Regardless of how change might be defined or described, the primary task for APS leaders and managers is to coherently manage organisational change. Change is an ever-present feature of organisational life, and the ability to manage change is a core skill" (APSC, 2014a: p.87). However, absent from this primary task is a framework for demonstrating the achievements from managing that change. That framework has been under-researched (O'Flynn, 2015) and is the focus of this thesis on implementation theory.

Implementation theory continues to be relevant to the practice of APS reform. New requirements for agency performance were mandated by the Public Governance, Performance and Accountability Act 201322: the PGPA Act (Finance, 2017e). This includes measuring the effectiveness of APS non-financial performance (s.38) and improved accountability (s.39):

39 Annual performance statements for Commonwealth entities
(1) The accountable authority of a Commonwealth entity must:
(a) prepare annual performance statements for the entity as soon as practicable after the end of each reporting period for the entity; and
(b) include a copy of the annual performance statements in the entity's annual report that is tabled in the Parliament.

As demonstrated in figure 5 of chapter 2, this requirement for performance statements is similar to the 'Managing for Results' reform. There is both political and organisational amnesia (Tingle, 2015; Wettenhall, 2011) evident in the following use of 'for the first time' by the Assistant Finance Minister introducing the PGPA Bill in 2013.

The Minister's Second Reading Speech introduced the Bill, with requirements for performance monitoring. Minister Bradbury explained: "The PGPA Bill seeks to link the key elements of resource management to establish a clear operational cycle of planning, measuring, evaluating and reporting results to parliament, ministers and the public. In particular, the Bill will, for the first time, [emphasis added] explicitly recognise the value-add of rigorous planning, performance monitoring and evaluation that goes beyond financial reporting. The bill would achieve this by, among other things, requiring entities to develop a corporate plan to monitor, assess and report performance to a set of transparent standards" (Bradbury, 2013). That statement demonstrates two factors in practice: corporate amnesia and the reform series.

22 At https://www.legislation.gov.au/Details/C2013A00123

Both of these factors are relevant to this research on embedding management reform. Individually or together, organisational amnesia and the reform series (Rhodes, 2016; Wettenhall, 2013) can disrupt the long-term implementation of any one reform. The reform series has come around again in APS practice, again featuring planning, measuring, evaluation and establishing results. Relevant factors include the need for cooperation across jurisdictions and for successful, effective implementation, through the initial use of program logic and the later application of evaluation (Head & Alford, 2015). There have been no conclusions or definitions of implementation success, and these assessments did not draw upon the earlier framework for successful organisational change (Marsh & McConnell, 2010). These gaps indicate a plethora of problems inherent in the management of reform implementation, and a failure to apply any integrated research on implementation, change management and evaluation in a whole-of-APS framework (e.g. Blackman et al., 2013). These conclusions highlight a research-practice gap between implementation theory and APS reform practice.

Past APS reform practice has not led to embedded reform changes. Since the 1976 Royal Commission, policy implementation and APS management reforms have resulted in much research23, with recurring themes of leadership, performance, reform success and effectiveness. These have been comprehensively examined in both theory and practice, but in current practice the APS lacks four significant capabilities. These are the long-term capacities of policy formulation, implementation, embedding results and creating a culture of high performance (APSC, 2013f; Hawke, 2012; Head & O'Flynn, 2015; O'Flynn, 2011). The reasons for these absent skills remain unidentified and have not been linked to any outcomes of previous management reforms. Some research has included the factor of transformational leadership by change agents (Althaus & Wanna, 2008; Ryan et al., 2008), but this depended on the presence of those leaders as change agents. Research to date has not considered whether reform would continue over the long-term in the absence of those agents, or following staff changes.

23 Examples include: Althaus (2011); Baehler (2003); Bartos (2003); Davis & Bisman (2015); di Francesco (2000); Dixon (1996); Guthrie et al. (2003); Halligan (2007); Head & Alford (2015); Johnston (1998); Lee (2008); Lindquist (2010); Lindquist & Wanna (2015); Mulgan (2010); O'Flynn (2015).

A relevant lesson from the United Kingdom about absent leaders of public sector reform is yet to appear in Australian research. Capability Reviews of United Kingdom Departments did not continue after their initiator left office (Panchamia & Thomas, 2014). This illustrates the significance of reform being initiated by a top-level change agent but not continuing in the subsequent absence of that person. It identifies a potential gap between the initial change agent commencing public sector management reform and the later management and embedding of those changes. Managing change is part of a high-performing Australian public service (Blackman et al., 2013). This recognises that the current priorities for the APS are constant change and staff being able to adapt, coupled with long-term goals leading to long-term agency performance. By outlining the competencies for leadership, management and staff, that framework emphasises the complexities of the separate levels of public sector performance, especially in providing feedback with performance information on those long-term agency objectives. This also identifies the importance of extended time in an implementation and evaluation framework.

This chapter drew upon theories of implementation, change management, organisation research and evaluation to examine research into the practice of APS reform. Each body of research contributes separate factors: reform commencement, organisational change over extended times of decades, and performance information about later effectiveness. These factors appear yet to be integrated into a larger framework for achieving effective reform. The insights into understanding and embedding reform of the APS that demonstrate the relevance of this research, and that outline this framework, are presented in the following sections.

3.10 Insights from this Review

This review has suggested that implementation theories can be placed into four wider contexts. For their potential relevance to enhancing theories of implementation, they are: (1) the design of management reform before commencing its implementation; (2) managing the organisational change resulting from implementing that reform; (3) the extended time needed to embed the changes from that reform; and (4) evaluating whether those changes were effective. Re-examining the current conceptualisations of reform, change management and reform effectiveness may complement current theories of implementation and the assessment of management reform success. These re-examinations follow.


Four research fields potentially contribute to implementation theory. They are: goal achievement (Scheirer, 1987); implementation over extended time (Pettigrew, 1990); effective change (Fernandez & Rainey, 2006); and demonstrating reform performance by evaluation (Rogers, 2008). A theory-practice gap has been identified by Kuipers et al. (2014), with more practice-based study of the outcomes of successful change being required. Change management research is contributing to re-focussing implementation (Shannon, 2017). Some connections are emerging, including the relationship between leadership and implementing change (Kuipers et al., 2014). The acceptance of change by public sector staff also depended on the styles of their direct supervisors, not exclusively or necessarily on the agency's senior leadership (van der Voet et al., 2016a). This echoes one lesson from the evaluation of the APS MfR reform: that change needed to be carried right through the organisation and its staff, rather than being solely initiated from the top (TFMI, 1992). This establishes the significance of both extended time and the impact of mid-level managers on all staff in carrying out effective change.

This review established there is no single framework for implementing public sector reform long enough to demonstrate effectiveness in achieving embedded results. It identified a developing interest in refocussing the reform lens from implementation to change management (Briggs, 2007a; Funnell & Rogers, 2011; Gill, 2002; Lindquist & Wanna, 2015; Pollack & Pollack, 2015; Shannon, 2017; Stewart & Kringas, 2003; Wanna, ed, 2007). This refocussing would partially extend implementation theory with change management theory, although the challenge of ensuring reforms endure over an extended time period remains. In APS practice, repeated management reforms have been labelled a syndrome (Rhodes, 2016) and an industry potentially resulting in unstable agencies (Wettenhall, 2013). Otherwise, policy research has not produced comprehensive frameworks of success (Marsh & McConnell, 2010). Evaluating the success of public sector reforms has been consistently difficult, if undertaken at all (Breidahl et al., 2017; O'Flynn, 2015; Pollitt & Bouckaert, 2003, 2011). This chapter has reviewed the factors in an implementation theory incorporating extended time, partially by drawing on the stages of the public policy cycle.

The five stages of the public policy cycle include both implementation and evaluation. They are: "agenda setting; policy formulation; decision-making; policy implementation; policy evaluation" (Howlett & Cashore, 2014: p.23). By comparison, implementation research has not identified a comparable framework for evaluating successful management reform when examined over extended periods of time, such as twenty to thirty years. This chapter has outlined some of the dynamic processes of reform and the means of evaluating their achievement, but management reform is only rarely evaluated. The last reform of APS practice to be evaluated occurred in 1992, with claims of success in that MfR implementation being made by both researchers and practitioners (Pollitt & Bouckaert, 2011; TFMI, 1992). By contrast, the long-term absence of demonstrating the success of management reform is only now being recognised (O'Flynn, 2015; Breidahl et al., 2017: p.226). This review established that APS reforms since the 1976 RCAGA proceeded through a series that was not always managed effectively to a demonstrable finality, nor formally evaluated, and success was not professionally established.

The size and dispersion of the APS are factors in the implementation of reform. Since 1992, there has been a lack of review of each reform and its outcomes, which would have provided some evidence (or the lack of it) for the next in the APS reform series, now apparent in the requirements of the PGPA Act. This review has identified gaps between public sector practice and the following research disciplines: change management (Kuipers, et al., 2014); policy and managing its implementation (Lindquist, 2010; McConnell, 2015; O’Flynn, 2011; O’Toole, 2004; Wanna et al., 2000); public administration, more generally (Head, 2015; Peters & Pierre, 2017; Radin, 2013); reform intention versus evaluated outcomes (Marsh & McConnell, 2010; Mauri & Muccio, 2012; Sabatier, 1986). A model of implementing management reform which becomes demonstrably embedded remains to be researched.

This review identified other disciplines relevant to successfully implementing management reform. These disciplines were the bases for examining three factors relevant to implementation theory and practice: the design of policy reform; the management of change resulting from its implementation; and the joint factors of extended time for implementation and embedding. First, this review considered existing theories of management reform and implementation. Second, it reviewed theories of change management and longitudinal organisation research, to identify the key issues of change over extended time. Third, it assessed their relevance to APS management reforms. Fourth, it examined key conclusions in the literature about management reform success or failure. Fifth, it reviewed the currently-separate components of successful reform implementation, for their applicability to the Australian context. Sixth, it highlighted perceived gaps in research and practice. This review established the many issues relevant to implementation and the change management involved: reform initiation, effective senior management, staff involvement in the reform processes, sufficient time for demonstrating reform impact, and later evaluation of those impacts. These issues are considered next.

A key underlying issue has been identifying success, as defined in different disciplines. This integrated literature review brought out four currently separate factors for successful reform: planned design of the policy; transformational senior leadership as change agents; agreed links between objectives and expected outcomes; and rigorous evaluation of reform outcomes over extended time. By introducing the place of extended time from organisation research (Pettigrew, 1990), this review has contributed a relevant factor to public sector implementation theory. A widely-quoted theory of change management (Kotter, 1995) was derived from the private sector and has not had extensive impact in the broader literature on the public sector. More specific insights follow.

Existing models of implementation and change lacked the factor of extended time, which would link reform intentions with their impacts established over that time. Insights derived from Pettigrew's framework for research on change over extended time could help to bridge gaps between research disciplines (Andrews & Esteve, 2015; DeGroff & Cargo, 2009; Skinner, 2004; Weiss, 1999). A key finding was to link "content, contexts and processes of change over time to explain the differential achievement of change objectives" (Pettigrew, 1990: p.268). Current theories of implementation and public sector change acknowledge the need to embed, but include neither extended time nor planned evaluation in their frameworks.

This thesis is relevant to both APS practice and implementation research. Neither field has identified a framework for evaluating successful management reform when examined over extended times, such as twenty to thirty years. There is developing interest in this lack of theory connecting reform implementation with the eventual outcomes of that reform. This review identified that the significant factors of embedding and evaluation have begun to enter implementation research and practice, while also outlining some of the dynamic processes of reform and the means of evaluating their achievement.

The APS reform series has been identified in research, but the underlying reasons for that series and its repetitions have not been examined. This absence frames the place of embedding and evaluating reform effectiveness in this thesis. A rare consideration of the evaluator's dilemma of attribution (Bovaird, 2014; Funnell, 2000; Hatry, 2013; Pollitt & Bouckaert, 2011) concluded that establishing cause-and-effect models was a key element of performance management and assessment, but that this "has often been slipshod and inappropriate" (Bovaird, 2014: p.19). For example, evidence from current Australian inter-governmental health policy revealed there was no national implementation plan by the APS lead agency and a low priority given to evaluation across all involved jurisdictions (Hughes et al., 2015). This example supports concerns about current APS capabilities in implementation and evaluation.

Current APS management reform emphasises demonstrating non-financial program performance (Barrett, 2014; Barrett, 2017; Finance, 2014; Tiernan, 2015c), which raises two relevant issues. The reforms (re)emphasise delivering outcomes and effective service delivery, implicitly acknowledging the absence of models for these skills and practices across the current implementation processes. Head (2015) has identified a gap between what the public sector does and what gets researched, where “many of the most interesting ideas in public administration have come from the world of practice, and researchers must be able to learn from them” (Radin, 2013: p.6). This practice-based review of the APS policy reform series may assist in enhancing implementation theory. In summary, the significance of embedding and evaluation has now entered both research and practice: “Evaluation is no longer seen as the responsibility of a few technical experts, or as an afterthought to the ‘real’ work. Evidence-informed decision making, and evaluative thinking, need to be embedded in the ways that organisations and people work” (ANZSOG, 2017). They are factors in this study.

This review examined existing theories of reform implementation in public administration and introduced three further disciplines: change management theory, organisation research and evaluation theory. This established the factors involved in managing the public sector management changes sought through reform, and the extended timing consequently needed for measuring the impacts and effectiveness of that reform. Theory-practice gaps relating to embedding and measuring success were identified in each discipline, while those conclusions appeared to have had little impact either in practice or across the disciplines. These gaps indicate there is incomplete maturity in implementation research and some insufficiency in its frameworks.

This review drew upon both APS practice and the above four research disciplines and consequently has framed the following four questions: (1) did this management reform of program evaluation become embedded in the APS? (2) what is the role of public sector change agents in embedding APS management reform? (3) how can change management frameworks explain the challenges of implementing APS reform policies? (4) what insights might be learnt from applying a lens of extended time to implementation theory and reforms that endure? Chapter 4 reframes implementation theory, by examining one APS reform and identifying potential additional features from the three additional disciplines above. It includes the place of extended time in considering such an enhanced framework for understanding embedded APS reform.


Chapter 4. Re-Framing Implementation Theory

Reform processes are change processes that need to be managed, not only in terms of implementation but also to realize the goals (which may not be shared by all actors and/or may change during the process) and to prevent undesired effects for some actors. Thus, attention to theories and strategies of change and practices of monitoring, shaping and adapting strategies and directing actual or desired communication processes is essential (van der Meer et al., 2016: p.275).

4.1 Introduction

The central research question asks whether the management reform of program evaluation became embedded in the APS. As evidenced in the quote above, implementing such reform requires additional factors beyond its initiation, including change agents, change management over time and preventing ‘undesired effects’. Chapter 3 reviewed existing theories of reform implementation in public administration, by introducing three additional fields of change management, organisation research and evaluation. Theory-practice gaps identified in each discipline related to additional factors of extended time, embedding and measuring success. This chapter reframes implementation theory, by examining in further detail those factors from change management, organisation research and evaluation. Public sector management reform has been defined as “deliberate changes to the structures and processes of public sector organisations with the objective of getting them (in some sense) to run better” (Pollitt & Bouckaert, 2011: p.2). That definition contains two factors: change, and change for the better. By contrast, the quote above contains three factors: active change management; goals which may shift during implementation; and preventing undesired outcomes. These differences mean that this study also asks three supplementary questions.

The first is: what is the role of public sector change agents in embedding management reform? This examines the use of APS change agents and any links with the long-term outcomes of management reform, and generates the second question: how can change management frameworks explain the challenges of implementing management reform policies? As management reform is intended to realise goals of changed practice, that raises an issue of time to be discussed: the extended length of time required to achieve those goals. This results in the third question: what insights might be learnt from applying a lens of extended time to implementation theory and reforms that endure? Implementing the changes of management reform and measuring their effectiveness mean existing implementation theory may benefit from insights from change management, organisation research and evaluation theories. This chapter identifies their contributions to the findings in chapters six, seven and eight.

Implementing management reform means changing administrative practices (Pollitt & Bouckaert, 2011; van der Meer et al., 2016). That link is currently under-developed, but subject to some re-assessment as to the means of reviewing reform success or failure (Shannon, 2017). The gaps above indicate there is incomplete maturity in implementation research and some insufficiency in its frameworks. This chapter begins by explaining the choice of four theories, defines the key concepts of this study, then examines the current relationships between these factors. These factors have not been integrated into a theoretical framework of leadership for achieving demonstrable reform and change that is effective and becomes embedded in agency practice (Kuipers, et al., 2014). Setting out the structure for this re-framing helps clarify an assessment of the factors associated with potentially embedding APS management reform.

4.2 Re-Framing Implementation Theory through APS Practice

This thesis challenges assumptions that an APS Secretary can embed management reform. APS practice in implementing management reform is top-down, with Secretaries required “to take responsibility for the stewardship of the APS and for developing and implementing strategies to improve the APS”24. A past assumption has been that APS Secretaries can implement, manage and achieve the long-term changes of APS management reform by directive (e.g. Grube, 2011; Halligan, 2018; Weller, 2001). As discussed in chapter 3, implementing public sector management reform requires extended time to change the practices of management and staff, and is difficult to assess as an on-going reform, if it is evaluated at all (Barrett, S., 2004; Boston, 2000; Breidahl et al., 2017; Craft & Halligan, 2017; O’Flynn, 2015; Pollitt & Bouckaert, 2003). This study also challenges the sense of completion implicit in any practice-based conclusion that a reform has been ‘implemented’.

Accordingly, this chapter reviews the gap between implementing and embedding: the gap between starting a reform and its becoming embedded, meaning that the management reform has continued long enough to lead to a permanent change of practice (MAB, 1993; MAC, 2010). This gap can be seen in past failures of reform practice (Hawke, 2012; Ilott et al., 2016; Johnsen, 2005; Lindquist & Wanna, 2015; O’Flynn, 2011; Pal & Clark, 2015; Pollitt, 2013; Pollitt & Dan, 2013; Rhodes, 2016; Shergold, 2015). The failures have been noted but their systemic reasons not researched, suggesting that current implementation frameworks do not result in long-term, effective reforms in APS management practice.

24 Under S.64(3)(a) of the Public Service Act 1999. At http://classic.austlii.edu.au/au/legis/cth/consol_act/psa1999152/s64.html

This is a case study of implementing program evaluation in the 1984 APS reform of ‘managing for results’. It serves two purposes: as an example of an APS management reform that came and went; then to frame the examination of embedding. The initial top-down implementation theory (Pressman & Wildavsky, 1973; Smith, 1973; Van Meter & Van Horn, 1975) and its alternative of bottom-up theory (Elmore, 1979; Lipsky, 1980) resulted in a lack of comprehensive implementation theory (Matland, 1995) and there is potential for re-framing current implementation theories into a third generation (Saetren, 2014). Factors from three other disciplines are examined for their relevance to enhancing implementation frameworks.

Existing implementation frameworks are reviewed by adapting Pettigrew’s “Longitudinal Field Research on Change” (1990). This framework recommends that organisational change and its initial substance are studied in a context of connected levels of analysis, are reviewed over extended time and are then analysed later for the impacts of those changes, which may differ from the original objectives. Consequently, Pettigrew’s six-part framework of “(1) content, (2) contexts, and (3) processes of change (4) over time (5) to explain the (6) differential achievement of change objectives” (Pettigrew, 1990: p.268) may usefully be applied by reform actors to achieve reform outcomes. From the field of organisation research, Pettigrew’s theory emphasises a factor of extended time that may complement current implementation theories. Together with theories of change management and evaluation, these three disciplines are the bases for this potential re-framing of existing implementation frameworks.

4.3 Existing Implementation Frameworks

Early implementation theory contained an implicit factor of geography. That factor was the distance in effective management influence between the national government at the centre and its geographic periphery: the place where the outcomes of reform were delivered. That geographic factor was implicit in examining the gap in reform outcomes: between those expected by bureaucrats in Washington DC versus those actually occurring in Oakland, California, a distance of 4,500 kilometres25 (Pressman & Wildavsky, 1973). The need for verifiable links between central government policy decisions and their outcomes became a factor enhancing implementation theory (Van Meter & Van Horn, 1975). This top-down framework emphasised that implementation was intended to meet the objectives of the centre (Hill & Hupe, 2003; Hupe & Saetren, 2015; O’Toole, 2000; Sabatier & Mazmanian, 1979; Saetren, 2005). That geographic periphery became the focus of a second generation of bottom-up theory: recording the expected behaviours and outcomes at the implementation front-end by ‘street-level bureaucracy’ (Elmore, 1979; Lipsky, 1980/2010). By highlighting front-end reform outcomes, Elmore and Lipsky expanded the meaning of implementation beyond a straight-line process of directives down from central policy-makers. There remained a difference in meaning between intention and outcome, as to whether that resulted in successful reform.

25 From https://www.travelmath.com/drive-distance/from/Washington,+DC/to/Oakland,+CA (accessed 29/1/19)

A factor relevant to public sector reform practice is implementation success, meaning achievement of the original policy/reform objectives (Matland, 1995; O’Toole, 2004). Despite the thirty years since Van Meter & Van Horn (1975) and the competing top-down and bottom-up theories, a “well developed theory of policy implementation” remained absent (Saetren, 2005: p.573). Further research has not resulted in a third generation of implementation theory leading to reform which is effective in achieving intended outcomes (Hupe, 2014). Factors absent included the nature of the links between actions and achievements, how those links would be made and by whom, coupled with the place of extended time. These absences were reflected in APS practice.

In APS practice, policy formulation had not been linked with its subsequent administration, and implementation practice had often been overlooked (Halligan, 2007). General claims of public service success were rarely justifiable, as success was often asserted and not demonstrated by agreed, pre-set criteria (Boyne, 2003; Marsh & McConnell, 2010). A definition of successful implementation has emerged: being implemented in accordance with the original policy objectives and achieving the intended outcomes, although qualified by the temporal factor of short and long-term timeframes in assessing those outcomes (Marsh & McConnell, 2010). This framework addressed two implementation questions: who would achieve those outcomes, and what would success look like over time? Their conclusions about a ‘framework for establishing policy success’ (Marsh & McConnell, 2010) contrasted with the later claim that no definitive meanings of success had emerged from the discipline of implementation research (Lindquist & Wanna, 2015). That latter conclusion assessed implementation theory from a restricted single framework and illustrated the value of this cross-disciplinary case study. By drawing upon evaluation theory, the place of success can be identified.


4.4 Evaluation Framework and Success

Reform success can be derived from evaluating its later outcomes. The structured links between the intention of the management reform by the public sector centre, its subsequent managed implementation and later assessments of reform effectiveness can be found in evaluation theory. An example is the program logic of figure 7 in chapter 3. The second generation of implementation theory (Elmore, 1979) contained unacknowledged elements from evaluation and its associated element of program evaluability (Poland, 1974). Evaluability means that the intended outcomes and stages of implementation are identified in the initial design of a management reform. This framework links reform initiation and its intended changes so that they are agreed by all the implementing agents, especially those at the geographic periphery. Evaluability is designed through the initial preparation of that reform’s program logic (Baehler, 2007; Funnell & Rogers, 2011; McLaughlin & Jordan, 1999; Rogers, 2008; Wholey, 1987). Using program logic in reform implementation ensures the intended outcomes of management reform are designed into the implementation frameworks.

Program logic enhances the ability of public sector managers to evaluate reform outcomes, because its structure facilitates analysis of those later reform outcomes. By emphasising that implementation is not a ‘set and forget’ exercise, program logic had some impact on implementation research (e.g. Baehler, 2003; Horst et al., 1974; McLaughlin & Jordan, 1999; Rogers, 2008). Program logic is designed to link the initial policy-makers with managers and their staff as later agents of change, and makes possible later assessments of effectiveness. However, by referring only to the non-specific and subjective ‘short’, ‘medium’ and ‘long-term’, the program logic framework does not define that timing. By adapting Pettigrew’s use of ‘longitudinal’ as ‘extended time’, the relevance of time in implementation theory is examined next.

4.5 Factoring Extended Time into Implementation Success

There has been a lack of agreed definitions of the time needed to assess reform success. Undefined ‘time’ was one element of an early implementation theory comprising five factors: sound objectives, whose achievement can be measured; unambiguous policy directives, understood by all participants and target groups; leaders with both managerial and political skill, who remain committed to the original goals; support by the chief executive and groups external to the public sector; and reform objectives and associated priorities not eroding over time (Sabatier & Mazmanian, 1979). This introduced the place of extended time, through that factor of reform priorities not eroding over time.

Implementation times have varied, even when identified. Three to five years is too short, potentially needing ten to twenty years (Sabatier, 1986). Other times have included five to ten years (Kotter, 1995), twenty years (Bovaird & Russell, 2007), thirty years (Hood & Dixon, 2016), or the undefined elements of “short, medium and long-term” (Marsh & McConnell, 2010: p.580). In APS practice, the 1980s reform of program evaluation was claimed to have been “incorporated into the culture and rhetoric of APS management” (TFMI, 1992: p.379) after four years, despite further work being required to embed this practice (MAB, 1993). This established a tension in meaning between ‘incorporate’ and ‘embed’, as evaluation was being practised (‘incorporated’) but was not yet ‘embedded’. This was demonstrated four years later, when evaluation was no longer required in the APS in 1997, following changes of Government and of Finance Secretary (Mackay, 2011). Despite those 1992 claims that evaluation was part of APS practice, the priority given by the centre to this practice disappeared after a short period of four years. This challenged practitioner claims (Keating, 1990; Sedgwick, 1994) of reform success.

Measurable reform success can be related to implementation priorities which do not erode over extended time. This also emphasises the importance of maintaining the momentum of a reform over that extended time (Bovaird & Russell, 2007; Morrison, 2014). The place of extended time may enhance implementation theory beyond a reform being started, by including the assessment of outcomes much later. Currently, there is a mismatch in implementation theory between reform commencement and actual impacts over time, especially because of the infrequent evaluations of the successes of any single reform (Breidahl et al., 2017). The above inconsistencies in timing mean there are incomplete implementation frameworks for assessing any later impacts of management reform and their effectiveness. A consideration of change over time follows.

4.6 Extended Time, Change Management and Implementation Theory

Additional factors in implementing management reform are time, change and success. The following six factors form a framework of extended time, which links “the content, contexts, and processes of change over time to explain the differential achievement of change objectives” (Pettigrew, 1990: p.268). Drawing on this factor of change over time contributes to a potential third-generation theory of implementation, by bridging a gap between the initiation of a reform’s changes, linking the processes of their subsequent implementation and finally analysing their eventual outcomes (‘explain the differential achievement’). It was noticeable that introducing the factor of achieving the original policy/reform objectives did not lead to a third generation of implementation success (Hupe, 2014; Matland, 1995; O’Toole, 2004; Saetren, 2005). Later, Saetren (2014) concluded that implementation research should be conducted over a timeframe of at least five to ten years. This still left a gap between initiation and demonstrable achievement in implementation theory.

Implementation theory has lacked a focus on outcomes. Only recently did Lindquist & Wanna (2015) suggest shifting research from implementation to delivering sustainable policy reform. A key distillation of research would be “a visual organising framework that identifies all of the key variables and considerations broached in the literature review” (Lindquist & Wanna, 2015: p.235). However, neither Saetren (2014) nor Lindquist & Wanna (2015) drew upon the parallel frameworks of either Pettigrew (1990) or Marsh & McConnell (2010), or the long-standing program logic of evaluation (Baehler, 2007; Wholey, 1987; Wolman, 1981). This highlights the narrowness of research in implementation theory based on that single discipline.

By utilising additional theories, two conclusions are relevant to public sector management reform. First, it is important to identify the eventual outcomes of such reform, together with the place of the change agents making these reforms happen. This can be undertaken by evaluating those reform outcomes over extended time (Marsh & McConnell, 2010; Pettigrew, 1990). Second, there is a danger of declaring achievement too soon. Change needs to be institutionalised and made permanent, to ensure the changes of public sector reform are embedded as long-term Government practice which sticks (APSC, 2013a; Fernandez & Rainey, 2006; Ilott et al., 2016; Kotter, 1995; Lindquist & Wanna, 2011; MAB, 1993; Pal & Clark, 2015). Kotter’s factor of institutionalising is one of the bases for this research into the effectiveness and embedding of an APS management reform, although a drawback of most change management research is its derivation from the private sector (Lindquist & Wanna, 2015). The need for ‘institutionalising’ public sector reform provides a link between an APS Secretary’s commencement of a management reform and the structural and management changes then required throughout the devolved APS and its geographically-distributed staff. Such embedding in permanent practice does not automatically result from leading the changes of such reform.

4.7 Leading the Changes of Management Reform

Initiating reform from the top requires further management of its changes over time. Both that initial leadership and later active management are required to make successful reform changes (Fernandez & Rainey, 2006; Gill, 2002; Jones & Kettl, 2003; Kotter, 1995; Matthews et al., 2011; Pettigrew, 1990). Insights into the management of that implementation may be gained from Pettigrew’s framework (1990), which sets out the next steps after beginning a reform. This includes maintaining the momentum of change and evaluating whether the changes were successful in achieving the reform objectives (Bovaird & Russell, 2007; Morrison, 2014). Continuing commitment from the top is essential, as reform momentum can wane after turn-over at the top during a reform (Stewart & Kringas, 2003). Stewart & Kringas did not identify any optimum time for occupancy of that leadership in ensuring successful change.

A framework for leading effective public sector change is a work in progress. A recent review of change management in the public sector concluded that the features of effective public sector leadership had not been identified, nor had it been established whether change processes achieved their intended outcomes. It identified a gap between change practitioners and researchers and concluded that still further empirical research was needed into reform successes, especially with a focus on extended time (Kuipers et al., 2014). An emerging factor concerns who exercises those change management skills over extended time, as APS leaders (Secretaries) have relatively short-term tenures of up to five years (Halligan, 2013). This contrasts with the extended time factors identified above, of twenty to thirty years for implementing successful reform.

The short-term tenures of APS Secretaries contrast with the extended times needed for successful management reform. This becomes relevant to the current implementation of the PGPA Act, where senior APS leaders are expected to be engaged and support these non- financial performance frameworks (Finance, 2016). If the presence and personal capabilities of those Secretaries as senior change agents are so significant, an unexamined factor has been what happens to the continuity of implementing a reform and its momentum among the associated staff, when those change agents move on. This may result in incomplete implementation and reforms not being institutionalised in long-term practice by all agency staff. This later absence of those senior change agents suggests a gap in change management research.

Kotter’s change management framework outlines the connections between time and embedding change. The last stage emphasises the factor of extended time in pursuing and institutionalising comprehensive change: the ‘embedding’ of this case study (Kotter, 1995). An incomplete development of Kotter (1995) in the public sector noted that organisational staff must take up the reforms into regular practice, through a framework that addresses “the relationship between the content and process of change and such organizational outcomes as performance” (Fernandez & Rainey, 2006: p.173). The means of pursuing such changes and organisational outcomes in the large and diverse agencies of the APS, so that they become permanent, have been under-researched. For example, on-going evaluation of any particular management reform for evidence of its impact and effectiveness is noticeable for its absence, in both theory and practice (Breidahl et al., 2017; O’Flynn, 2015; Stack et al., 2018). Factors which contribute to achieving effective management reform are examined next.

4.8 Factors in Models of Effective Management Reform

Currently there is no single framework for implementing and embedding effective public sector reform. Taken together, the research discussed in this thesis has revealed the following four separate factors: the extended time applied to that implementation (Pettigrew, 1990); maintaining the momentum of reform change, which is “the energy associated with pursuing a new trajectory” (Jansen, 2004: p.277); not allowing that momentum to decelerate (Barker et al., 2018); and ensuring those changes are embedded (MAB, 1993; Lindquist & Wanna, 2015). Three factors can disrupt the momentum of a particular reform change: if progress is not regularly reported; if the initial change leaders are replaced; or if the priorities of that reform are shortly afterwards overtaken by later ones (Hood & Lodge, 2007; Kamener, 2017; Lindquist, 2010; Lindquist & Wanna, 2015; Pollitt, 2013; Sabatier, 1986; Stewart & Kringas, 2003; Wettenhall, 2013). This has been exemplified in past APS practice, when the reform of program evaluation was no longer required in 1997 by a new Secretary of the Finance Department, who focussed instead on the introduction of the next reform: accrual accounting (Mackay, 2011). A new change management model could shift from change being led by the heroic individual to change leadership distributed throughout the organisation (By et al., 2016). Past APS reform practice has demonstrated the lack of distributed change management, the reform series and the ease of displacing reform priorities through changes at the top.

Because of this displacement, the claim by a committee of senior APS leaders that program evaluation had been embedded in APS culture was premature (Sedgwick, 1994; TFMI, 1992). This raises the issues of how implementing effective management reform can be assessed and its success defined. By drawing on evaluation theory in assessing this effectiveness, an enhancement of implementation theory may be possible. Using program logic to design a reform and its implementation results in a flow diagram of the activities and changes, over three time periods of short, medium and long-term (Figure 7 in Chapter 3). Such a diagram sets out the shared meanings for all the participants in that reform, about the planned (reform) impacts and their expected outcomes. A map for planning each of the implementation steps through to the final steps and their intended outcomes can be derived by using program logic.

Program logic was an early factor in evaluation theory. It establishes a visual understanding for all parties of the shared assumptions about policy impacts, and its use in designing reform implementation enables the subsequent changes of a reform to be managed. Program logic can be seen as a forerunner of the flow diagram in implementation theory and is the basis of the road map for planning each step to the intended outcomes of performance management (Hatry, 2013; Poland, 1974; Sabatier & Mazmanian, 1980). The use of program logic also enables the actual reform outcomes to be attributed to those changes, an important element in evaluating the effectiveness of management reform and enabling an eventual outcome to be ascribed to a particular factor of change (Funnell, 2000; Mayne, 2001; Pollitt, 1995; Rogers, 2008). The conclusions of attribution and reform effectiveness are important results from the implementation road map derived from initially using program logic. Program logic bridges the gap between initiating a reform and evaluating an effective outcome. It has entered limited Australian implementation research (Baehler, 2007; Funnell & Rogers, 2011; Head & Alford, 2015; Ryan, 2004; Wanna (Ed.), 2007). Its use elsewhere has also been limited, such as its absence from a look-back at thirty years of implementing public sector management reform in twelve countries, including Australia (Pollitt & Bouckaert, 2011). That review only summarised those reforms and did not evaluate their long-term outcomes.

That look-back drew upon a great deal of accumulated research about reform implementation. Pollitt & Bouckaert’s review of implementing management reform in twelve countries, however, proposed only a tentative model of public management reform. Admittedly highly simplified, this model consisted of only three stages: reform contents; implementation processes; results achieved (Pollitt & Bouckaert, 2011: p.33). Their model did not include any factors of active change management over extended time, and no connections were designed between senior management and affected staff. This demonstrated both a gap in current implementation research and the value of drawing upon additional fields to enhance implementation theory.

The focus of reform implementation is changing, from commencement to establishing impact. This change is occurring in APS practice, as demonstrated in the following three examples: the centre’s intention to embed APS values (APSC, 2013a); ensuring an evaluation culture in single Departments (Industry, 2015; Infrastructure, 2016; Mrdak, 2015; Southern, 2014); and applying the PGPA Act’s non-financial performance requirements across the APS (Morton & Cook, 2018). Research is now examining reform beyond this initial implementation in four ways: asking “Is Implementation Only about Policy Execution” (Lindquist & Wanna, 2015); seeking to embed management systems to improve success (Newcomer & Caudle, 2011); establishing reform impacts and success (McConnell, 2010; Pollitt & Dan, 2013); and using the definite language of embeddedness in ‘making reform stick’ (Ilott et al., 2016; Lindquist & Wanna, 2011; Pal & Clark, 2015). Evaluation and extended time are being linked, in frameworks of reform impacts and whether they stuck. The relevance of extended time as an implementation factor can be derived from the main research on time to date, which follows.

4.9 Perspective of Extended Time for Effective Reform Impact

In any conclusions about effective reform impact, implementation theory currently has varied meanings of time. The six steps of Pettigrew’s model of effective change (1990) may enhance a more systematic implementation theory, by challenging researchers to (1) study change and its initial substance in the (2) context of linked levels of analysis, (3) review the implementation of those changes (4) over extended time and then later (5) analyse the impacts of those changes, which may (6) differ from the original objectives. Pettigrew’s framework does not, however, contain the specific timing factors of short, medium and long-term in evaluation’s program logic, which become relevant to maintaining reform momentum (Bickman, 1987; Leithwood & Montgomery, 1980; Morrison, 2014; Scheirer, 1987). The key factors are extended time and evaluating achievement, which have been absent from existing implementation frameworks.

Existing implementation theory may be enhanced by the following six factors of Pettigrew’s framework. The first is common to much implementation theory: the content, being the change or reform under review. The second concerns the many organisational levels of reform. The third is the processes of implementing and later managing that change/reform. The fourth is managing those processes over an extended period of time. Finally, evaluating those processes is in two parts: (5) explaining the actual achievements and (6) comparing them against the original objectives. These six factors of Pettigrew’s framework have two key features.

First is the emphasis that extended time is needed to assess management reform outcomes. Examples of such time from implementation theory include ten to thirty years before assessing those outcomes (Bovaird & Russell, 2007; Hood & Dixon, 2016; Sabatier, 1986), or an undefined “short, medium and long-term” (Marsh & McConnell, 2010: p.580). Second, the framework emphasises comparing the actual outcomes against the original intentions of the change, although the methodology of this comparison was not specified. This is otherwise the evaluation discipline’s assessment of effectiveness (Boston, 2000; Rogers, 2008; Wholey, 1987; Wolman, 1981). The potential effectiveness of a single management reform can also be eroded by the disruptive turnover of successive short-term reforms (Hood & Lodge, 2007; Pollitt, 2013; Wettenhall, 2013). Consequently, Pettigrew’s factor of extended time contributes to evaluating whether management reform became embedded in permanent practice.

Assessing embedded management reform requires a comparison of actual outcomes against those intended. The intervening levels of implementation, and their interpretation by different agents of change, are the value added in Pettigrew’s framework. The framework also recognises the time between initiating reform changes and analysing their outcomes, or their eventual long-term impacts; the two (outcomes and impact) are not the same. This was demonstrated in the five-year gap between the outcome claimed in 1992, of the APS practising the MfR culture, and its impact disappearing by 1997 (Mackay, 2011; TFMI, 1992). While examining the results from implementing reforms has become a research priority (Funnell & Rogers, 2011; Hupe & Saetren, 2015; Jones & Kettl, 2003; O’Toole, 2004; Saetren, 2005), a more-general theory of implementing embedded reform remains to be developed. The limitations of current implementation research are explored in the next paragraphs about three other theories: change management, evaluation and organisation research.

4.10 Relevance of Three Existing Theories to Implementation.

Implementation theories have focussed on the difficulties of implementing public sector reforms. Research has commonly focussed on a range of negatives. These have included: its varying paradigms, principles, paradoxes and pendulums (Aucoin, 1990); the difficulties in evaluating reform changes (Boston, 2000); the need to take a wider, whole-of-government approach (Christensen & Laegreid, 2007); being alert to unintended reform consequences (Ghobadian et al., 2009); plus the explicit negatives of ‘Why Reforms so Often Disappoint’ (Aberbach & Christensen, 2014) and “Leading Change. Why Transformation Explanations Fail” (Hughes, 2016). Implementation has otherwise been regarded as a process of commencing reform changes (Hupe & Hill, 2016; Ives, 1994; MAB, 1993; TFMI, 1992). Recent analysis has concluded that implementation research requires studying beyond such negative implications and developing further research paradigms, potentially by connecting performance management with evaluation (Hupe et al., 2014; Kroll & Moynihan, 2017; Saetren, 2014). Given the limited reference to evaluation theory26 by Kroll & Moynihan, this was a late acknowledgement of the significance of evaluation in the performance of reforms.

To a limited extent, evaluation has been factored into past implementation analysis. Examples included the importance of later evaluating reform outcomes and of receiving interim inputs, through feed-back loops operating over extended time-frames (Sabatier, 1986; Sabatier & Mazmanian, 1979). These time-frames have varied in length: three to five years has been considered too short for establishing outcomes, and longer assessment periods of ten to twenty years, or even thirty years, have been suggested (Craft & Halligan, 2017; Sabatier & Mazmanian, 1979). These varying times complement Pettigrew’s factor of extended time and develop a key theme of this research: evaluating the demonstrable results of management reform in the long term. There has been no agreement on the optimal time-frames for evaluating those impacts, despite the recognised importance of institutionalising those changes (Bovaird & Russell, 2007; Fernandez & Rainey, 2006; Kotter, 1995; Newcomer & Caudle, 2011). When viewed over extended time, ‘impact’ need not equal ‘institutionalisation’, as exemplified in the coming and going of APS program evaluation between 1984 and 1996 (Mackay, 2011; TFMI, 1992). O’Flynn (2015) has emphasised the lack of robust models for understanding public sector reform. Little research has been conducted into management reforms over extended time that allows for evaluating both the effectiveness and the institutionalising of reform changes. This also highlights the significant factor of extended time in both theories of implementing reform and change management.

Change management theory and the factor of extended time are not necessarily integrated. A recent application of a time lens to strategic change research found that “relatively few studies have adopted a fine-grained longitudinal approach” (Kunisch et al., 2017: p.1006). Current change management research has been largely derived from the private sector, consistent with the impact of Kotter’s work (Müller & Kunisch, 2018; Pick & Teo, 2017; Pollack & Pollack, 2015). With implementation being re-conceptualised as change management (Shannon, 2017), current strategic change research becomes relevant for public sector reform and raises an expanded research agenda. This could include the following factors identified to date: the lengthy implementation time periods discussed, and redressing the reform ‘syndrome’ or ‘industry’ and its concomitant, reform fatigue (Hood & Lodge, 2007; O’Flynn, 2015; Radin, 2000; Wettenhall, 2013). Other factors include involving all personnel affected by the change, especially when they are physically distant from the reform and management centre, plus assessing the outcomes and effectiveness of the reform changes (Alderman, 2015; Barrett, 2014; Beer, 2009; Bovaird, 2014; Craft & Halligan, 2017; Funnell, 2000; Matheson, 2016, 2017; TFMI, 1992). The implementation differences recognised between the public and private sectors (Barker et al., 2018; Broadbent, 2013; Knies & Leisink, 2018) suggest that effective change management in the public sector remains a work in progress.

26 Of the forty-three references used by Kroll & Moynihan (2017), five (about twelve per cent) were from evaluation.

This includes initially designing public sector management reforms for their later effective implementation. The initial design of management reform can be linked to its later evaluability and effectiveness by evaluation’s program logic (Funnell & Rogers, 2011; Hunter & Nielsen, 2013; Rogers, 2008). Currently, however, program logic does not include an initial change leader or a subsequent change manager (figure 7, chapter 3). A transformational change agent is a reform leader able to step outside the traditional frameworks of operations and transform them (Battilana et al., 2010; By, 2005; Stewart & Kringas, 2003). The successful implementation of such reforms, however, requires both leadership and change management involving middle managers (Buick et al., 2018; Kuipers et al., 2014; Rouleau & Balogun, 2011; van der Voet, 2014). Change agents thus have an additional function: maintaining the momentum of reform change (Barker et al., 2018). One potential enhancement of a change management framework by ‘embedding’ is the model performance management framework for “Embedding Practices for Improved Success” (Newcomer & Caudle, 2011). Their framework mirrors Pettigrew’s ‘longitudinal’ element through the factors of program/reform performance and embedded success, linking change leadership with effective, long-term outcomes.

Evaluating the effectiveness of reform outcomes has been absent from implementation theory. The evaluation of models of public management reform has been relatively modest; this was identified early in the public sector reform series, but has not been developed into a framework of long-term and effective reform (Boston, 2000; Breidahl et al., 2017). A limited change management framework from Australian public sector practice has come from the separate domain of evaluation (Hanwright & Makinson, 2008). This framework featured a design flaw relative to Pettigrew’s context and extended time-frames, because the research had been conducted over a short timeframe of three years (2005-2008) in claiming implementation of an evaluation culture in a Queensland state department. It is notable that this was developed in a state jurisdiction and has not featured in research on federal performance or reform of the APS27. Federally, the only evaluation of APS reform was conducted several decades ago (TFMI, 1992). That evaluation was, however, qualified: further work was required to embed its findings in the routine practice of every program manager, especially the local managers in the distributed APS regions (MAB, 1993). No follow-up of such embedding was undertaken, and the requirement for program evaluation was abolished in 1996 (Mackay, 2011). Although Fernandez & Rainey (2006) focussed on success in organisational change and institutionalising such change, their framework lacked the factor of evaluation to demonstrate that success.

Formal evaluation of the outcomes from public sector management reform has been rare. This was especially so regarding any successful changes from such reform (Boston, 2000; Breidahl et al., 2017; O’Flynn, 2015; Skinner, 2004). The reasons for that rarity have not been researched. Reviews of reform have been attempted regularly (Hood & Dixon, 2015; Moynihan, 2006), though mainly by Pollitt, alone or with collaborators (Pollitt, 1995; Pollitt, 2001; Pollitt, 2013; Pollitt & Bouckaert, 2003; Pollitt & Bouckaert, 2011; Pollitt & Dan, 2013). Pollitt & Bouckaert’s attempted synthesis of these evaluations resulted in a model of public management reform limited to three stages: “contents of reform package; implementation processes; results achieved” (Pollitt & Bouckaert, 2011: p.33). Although it may be inferred that the ‘results achieved’ would be assessed through evaluation, their model did not include active change management over extended time and lacked connections between senior management and affected staff. These are the gaps reviewed in this chapter.

Gaps in implementation research were demonstrated by Pollitt & Bouckaert’s model. These gaps have been partially addressed by melding leaders as long-term change agents who have skills in organisational change and evaluation (Battilana et al., 2010; Rusaw, 2007). In APS management, the practice of evaluation had originally been considered an important management capability, although it had definite commencement and ending dates of 1984-1996 (Mackay, 1992; Mackay, 1994; Mackay, 2011). The subsequent demise of this requirement for demonstrating implementation effectiveness by evaluation went largely unrecognised, except by its former APS senior manager and, almost incidentally, in research (Halligan, 2003; Hawke, 2012; Mackay, 2011). After an absence of twenty years as an APS central priority, evaluation is again an adjunct for demonstrating non-financial performance in implementing the PGPA Act (Morton & Cook, 2018). These gaps in time illustrate both the lack of long-term research into testing the claims of implementation and the relevance of Pettigrew’s paradigm of extended time. These additional theories enhance implementation theory with the following factors.

27 There were six references in Google Scholar (12/8/19), two of which were from the Australian Government’s Australian Institute of Family Studies.

4.11 Linking those Three Factors Outside Implementation

The factors of change management, extended time and evaluation contribute to enhancing current implementation theory, though these factors are yet to be connected. Evaluations of reform outcomes have been routinely recommended, but appear to have been afterthoughts and have not generally been undertaken (Dawkins, 1985; Fernandez & Rainey, 2006; MAB, 1993). Such reluctance can be due to the initiating management placing no value on lengthy evaluation, defensively deciding that evaluation was a high-risk activity, and assuming the reform changes would inevitably lead to benefits (Skinner, 2004). This forms part of the background to this thesis’s examination of the ability of an APS Secretary to implement management reform by diktat that is intended to ensure embedded changes in APS practice. The framework for connecting such a diktat with later reform results is yet to be established.

Elsewhere there has been some further, but incomplete, development of a general reform theory. This has focussed on understanding the results from implementing reforms (Jones & Kettl, 2003; O’Toole, 2004; Saetren, 2005). These developments omitted links between initiation, implementation and outcomes. While Pettigrew’s framework of context and time links “the content, contexts, and processes of change over time to explain the differential achievement of change objectives” (Pettigrew, 1990: p.268), it omits reference to the connected program logic of evaluation. Pettigrew’s factor of extended time does bridge the existing temporal gap between the processes of initiating the discrete changes of reform and subsequently seeking out and analysing their later outcomes. O’Flynn (2015) has, however, concluded that development of any conceptual model for understanding public sector reform remains incomplete.

This incompleteness is reflected in the individual theories discussed here. This chapter has linked implementation theories in ways not yet forming a single comprehensive framework. Recently, it has been suggested that policy implementation studies can learn from organisation theory (Bozeman, 2013). This was supported by the complementary conclusion from a review of public sector change management research, which identified a theory-practice gap “between the world of the change practitioner and that of those studying change as outsiders” (Kuipers et al., 2014: p.16). There should be greater emphasis on studying the outcomes of successful change in practice. That conclusion supports this current research into the frameworks for implementing APS reform that demonstrably results in embedded, long-term change outcomes.

Embedding is a desirable outcome of implementation. No framework for ensuring this embedding was included in the recent question “Is Implementation Only about Policy Execution” (Lindquist & Wanna, 2015). Alternatively, from evaluation theory, program logic could bridge gaps in continuity and timing between initiating public sector management reform and its outcome. Program logic has been proposed in implementation theory to address public sector problems with long lead times (Baehler, 2007; Head, 2008a; Head & Alford, 2015; Ryan, 2004). However, there are also gaps in management perceptions between policy and implementation (Halligan, 2007), where implementation is regarded as only “a presupposed residual in goal achievement” (Hupe & Hill, 2016: p.118). This is consistent with the discussion above about the lack of evaluation of whether implementation became embedded, and highlights the significance of management attention being required over the long-term.

By contrast, long-term management attention was absent in the MfR reform of program evaluation. As a central priority and in APS practice, evaluation lasted about ten years; continuity of senior managers had been a key factor in sustaining that reform (Halligan, 2018; Mackay, 2011). This lack of attention to the later, long-term outcomes of management reform can be observed in Australian implementation research, which has typically been published shortly after the announcement of a particular reform (e.g. Keating, 1990; Lindquist, 2010; Weller, 1993). Apart from the practice-based evaluation of the MfR reforms, little research has subsequently been undertaken on the APS series of reforms and their success or failure (O’Flynn, 2015; TFMI, 1992; Wettenhall, 2013). This research addresses Pettigrew’s long-standing concerns (Pettigrew, 1990; Pettigrew et al., 2001) about the narrowness of research based on a snap-shot, descriptive analysis of a single reform event.

4.12 Conclusions about Enhancing Implementation Theory.

Currently, implementation research has focussed on single public sector reform events. The factor of ‘embedding’ is a recent contribution to both implementation research frameworks and APS practice, progressing into a somewhat-exasperated desire to make reform ‘stick’ (APSC, 2013a; Ilott et al., 2016; Lindquist & Wanna, 2011; Newcomer & Caudle, 2011; Pal & Clark, 2015). By contrast, Kuipers et al. (2014) concluded more research is required on the outcomes of successful change in public sector practice. A theory-practice gap remains in developing frameworks for implementing public sector reforms that are demonstrably embedded.

Part of this gap stems from the under-development of program logic in implementation theory. In evaluating the outcomes of reform changes, program logic has featured in three other fields: program performance (Bickman, 1987; McLaughlin & Jordan, 1999); evaluation and the priority of evaluability, the testing of public sector systems’ readiness for review and ability to be evaluated (Baehler, 2003; Hatry, 2013); or both (Rogers, 2008; Scheirer, 2012). Program logic has partially entered public administration research (Baehler, 2007; Head & Alford, 2015; Isett et al., 2016). This has not led to a framework linking implementation with the intention of later evaluating the outcomes of that implementation.

The connection between policy implementation and evaluation represents a work in progress. Linked examples have included Baehler (2007), Boston (2000), De Groff & Cargo (2009) and Laughlin & Broadbent (1996), including calls to link performance management with program evaluation as complementary activities (Hatry, 2013). Despite these calls, there is an absence of comprehensive implementation frameworks making those connections. As reviewed in this chapter, theories of implementation, extended time, change management and evaluation may be useful in examining the ability of an APS Secretary to command and implement an effective reform throughout her/his responsibilities. There is, consequently, an absence of integrated evidence on the effectiveness of these chief executive abilities over extended time.

This absence of evidence may stem from the limited fields on which research into the implementation of public sector management reform draws. An example was a recent review of implementation research, which drew upon the traditional but limited fields of public policy, political science, public administration and public management (Saetren, 2014). This limitation meant the review did not include the separate frameworks of organisation research, change management or evaluation that have been reviewed in this study. While concluding that research in his chosen discipline of policy implementation had reached a mature stage, Saetren (2014) indicated still further research was needed: research into the links between theory and practice, over extended periods of time such as five to ten years.

By interposing a definite period of time, Saetren implicitly acknowledged Pettigrew’s framework of extended time, which is yet to be factored into implementation theory.

The framework for implementing embedded public sector management reform is yet to be established. The connections between commencement, implementation and assessing the eventual outcomes of management reform have not been researched, although the ineffectiveness of past APS reform has been identified (Lindquist, 2010). That ineffectiveness has not been linked systematically with evaluating either the context or the outcomes of those past APS reforms, resulting in the conclusion that “we are poor at evaluating reform both theoretically and practically” (O’Flynn, 2015: p.19). Except for the MfR Reform (TFMI, 1992), evaluations of APS reform over extended time have not been undertaken. This study contributes new knowledge to address that lack of look-back and potentially enhances current public sector implementation frameworks.

Those current frameworks of implementation theory inform much policy research. However, some more-reflective research has concluded implementation theory could be better informed by public sector practice (Broadbent, 2017; Head, 2015; Kernaghan, 2009; O’Toole, 2004; Radin, 2013). Current implementation frameworks appear insufficient to assess the long-term impact of management reform that demonstrably results in effective change, because they do not link the initiation of a reform with the leadership of its long-term outcomes and the embedding of those outcomes. The leader of management reform is a key factor in change, yet that leader’s tenure is typically short-term, even though continuity of the reform leader is critical to maintaining reform momentum (Armenakis et al., 2000; Halligan, 2018; Kamener, 2017; Stewart & Kringas, 2003). Leadership research has failed to consider the sustainability of public sector management reform when that initial leader has moved on (e.g. by promotion, retirement or re-assignment). For example, changes at the top in 1996, in both the government and the Finance Secretary, were instrumental in program evaluation no longer being mandated in the APS (Mackay, 2011). Implementation theory lacks a longitudinal factor that includes evaluating the long-term management of embedded reform outcomes.

Past evidence has identified the importance of evaluating reform impacts over an extended time frame of several decades. Examples have varied, but have included ten to twenty years, thirty years and forty years (Craft & Halligan, 2017; Pollitt, 2013; Sabatier & Mazmanian, 1979). By contrast, change management models (e.g. Kotter, 1995) have omitted extended time in relation to the impact of designated change agents on implementation. The last element of Kotter’s model, institutionalising reform changes, is a temporal factor from the private sector that lacks definition and application within the public sector. It becomes relevant to the time required for continuing management reform that has demonstrable impacts on the distributed staff of the APS. Demonstrating those outcomes and long-term impacts is more associated with evaluation theory.

The framework for planning implementation has lacked evaluation of the long-term outcomes. In practice, evaluating the results from thirty years of these public sector reforms has been inconclusive (Pollitt & Bouckaert, 2011). There are three difficulties in the practice of implementation: actual management reform has been more about initiation than achieving long-term outcomes that stick; the means of maintaining reform momentum through long-term change managers has been lacking; and the framework necessary for defining and successfully attributing the results of management reform is missing. A rare example of the evaluator’s dilemma of attributing actual outcomes to implementation initiatives in performance assessment concluded that such cause-and-effect analysis “has often been slipshod and inappropriate” (Bovaird, 2014: p.19). This was a late development in implementation theory and once again drew attention to an absence: how to link the initiation of management reform with any eventual outcome, especially one that became embedded.

Linkages are beginning to be made, although from the evaluation discipline. These links are between leadership and performance management, as evaluation can become a management adjunct to evidence-based policy (Bourgeois, 2016). This addresses long-standing concerns that have otherwise treated evidence-based policy as a separate research theme (Head, 2008b; Head, 2016; Sanderson, 2002). Developments in other disciplines are also relevant to enhancing implementation theory. These include public sector leadership; change management; organisation research; Pettigrew’s context of extended time; and the evaluation framework of program logic, which links good initial program design with the intended outcomes, as shared knowledge agreed by all participants. The first three disciplines are implicit in the inclusion of the latter two factors in this framework, to re-assess existing implementation theory.

The above omissions highlight the need for further research to extend those implementation models beyond their traditional foci, and indicate the next steps in implementation research and the development of its paradigms. Although not drawing upon the alternative research disciplines discussed here, there have been recent calls for these next steps, especially to traverse the wider gap between policy success and failure (Hupe & Saetren, 2015; McConnell, 2015). McConnell’s call unfortunately created a false dichotomy of either/or (success/failure) and blurred some of the linear and temporal continuities of policy, reform, implementation and long-term policy outcomes. The lack of comprehensive approaches in implementation research has been longstanding (Matland, 1995). Existing implementation theory can be characterised as drawing upon somewhat-traditional sources in the broad discipline of public administration.

This chapter has summarised three other disciplines, change management, organisation research and evaluation theory, that can potentially contribute to implementation theories of embedded reform. In this sense, combining multiple theories in public policy is a developing research interest, through integrating reform implementation, performance management and program evaluation (Andrews & Esteve, 2015; Cairney, 2013; Kroll & Moynihan, 2017). In reform implementation, the additional factors being recognised are readiness to change (Buick et al., 2015) and “realistic timeframes for anchoring reforms” (Lindquist & Wanna, 2015: p.231). In addressing the concerns of O’Flynn (2015) about evaluating reform and identifying the achievements of policy or administrative reform, the alternative field of evaluation regularly asks a question that is rare in the implementation field: are we there yet? That question contains an important implicit supposition about long-term management attention to reform.

Undertaking evaluation presupposes a continuing interest by successive management in the long-term outcomes or evidence from initiating reform. As evaluation provides evidence of longer-term outcomes more generally, it also presupposes a public sector learning environment that values evidence-based policy (Head, 2008b; Head, 2016; Head & Alford, 2015). Especially in establishing those long-term outcomes, these developments highlight the value of Pettigrew’s framework: incorporating the factor of extended time and the effective management of reform changes so they become embedded in agency practice. A conclusion from this chapter is the need to re-frame implementation theory more widely than currently found in the literature, for the following reasons.

Existing implementation models give only limited consideration to embedding policy or public sector management reform. Two factors which have been overlooked are extended time, from organisation research, and program logic, from evaluation theory. Further consideration of embedding reform will draw upon change management, organisation research and evaluation. This chapter identified features of extended time in the evaluation discipline that may extend current frameworks of reform in the implementation literature, by including the evaluation of long-term changes and outcomes. Specifically, the chapter has considered whether integrating active evaluation into theories of implementation could result in an enhanced theory of reform effectiveness. Those main theories form the following framework.

4.13 Framework of this Research

The main research question is whether the management reform of program evaluation became embedded in the APS. This is complemented by three further questions: what is the role of public sector change agents in embedding APS management reform? How can change management frameworks explain the challenges of implementing APS reform policies? What insights might be learnt from cross-disciplinary research integrating reform, change management, the factor of extended time and the evaluation of enduring reform impact? This chapter has concluded that existing implementation theory is insufficient to answer the main theoretical consideration of this thesis, relating to the role of change agents in embedding management reform in the APS. By contrast, assessing current research on implementing public sector management reform has revealed six repetitive factors.

These six factors are not currently linked in a single framework. They are: (1) the need for such management reform is asserted, not demonstrated (e.g. AGRAGA, 2010); (2) top-down leadership is important in implementing reform; (3) the outcomes of management reform are generally unassessed; (4) potential impacts are not designed at the commencement of reform; (5) reform ‘success’ is asserted rather than demonstrated; and (6) later reform effectiveness is not assessed. The absence in step one of any base-line information results in difficulties in assessing reform effectiveness. Steps four, five and six are not necessarily the same, as this chapter has shown that asserting reform ‘success’ does not equal demonstrating long-term reform effectiveness. The repetition of these themes has been called a reform industry (Rhodes, 2016; Wettenhall, 2013) and the absence of comprehensive frameworks has been noted (O’Flynn, 2015). The reasons for these re-occurring reforms have not, however, been examined in theory or practice. Efforts are being made to bridge an identified gap between what the public sector does and what is researched, both in Australia and internationally (Buick et al., 2016; Head, 2015; Isett et al., 2016; Peters & Pierre, 2017). A model of embedded structural public sector reform remains to be developed, which is relevant to the following current priority in APS performance and reform practice.

A current and on-going priority for the APS is implementing the Public Governance, Performance and Accountability Act 2013 and embedding that Act’s requirements. Actual APS practice is expected to mature over time (ANAO, 2017b). This conclusion from the accountability centre identifies the significance of extended time in this research. After a relatively short implementation period of four years, the Act’s implementation (but not its effectiveness) has been reviewed (Alexander & Thodey, 2018). Their first recommendation related to driving change through leadership, as led by the APS Secretaries Board (APSC, 2014a) and periodically reviewed by Parliament’s Joint Committee of Public Accounts and Audit (JCPAA, 2017). These developments illustrate two factors of this chapter: the top-down impact of Secretaries needed to maintain reform momentum, plus undertaking later evaluation of actual implementation practice and impact, in this case by a body external to the APS.

This thesis emphasises the need to measure management reform success. Meanings of success emphasising the long term have included ten, twenty and thirty years (Bovaird & Russell, 2007; Craft & Halligan, 2017; Kotter, 1995). Common meanings remain to be developed; underlying these varying definitions are the repeated reform series and the lack of institutional memory (Barrett, 2014; Hood & Lodge, 2007; Pollitt, 2013; Tingle, 2015; Tiernan, 2016; Wettenhall, 2011). This suggests that failure to embed management reform results in subsequent generations of top-down reform agents and public sector staff re-inventing the same management and structural changes.

In the APS, there has been a reform gap of twenty years between the requirement to demonstrate performance by evaluation being abandoned in 1997 and being re-recommended by the Finance Department in 2018 (Mackay, 2011; Morton & Cook, 2018). That twenty-year gap can be compared with the length of service of APS staff at 30 June 2018: sixty-four per cent had served for less than twenty years and only six per cent for longer than thirty years (APSC, 2018d). That may contribute to corporate amnesia and the repeated reform series, because “there is no-one to learn from” (Henry, 2015). This chapter suggests that ‘extended time’ in the APS means between twenty and thirty years. A key finding from this chapter about implementing management reform is that embedded reform requires commonly understood performance and management information systems, to enable central management assessment of the impacts sought by the Commonwealth government. A key implementation outcome is effective reform.


An implementation model emphasising effectiveness, outcomes and embedded impact is currently lacking. This research seeks to contribute to such a model by adapting elements of management reform, change management and its agents, extended time and evaluation. The planned framework is based on the following four factors: the change and logic models of purposeful program theory (Funnell & Rogers, 2011); asking whether the reform achieved its intended outcomes, and over what extended time-span (Marsh & McConnell, 2010); the extended timing inherent in longitudinal research (Pettigrew, 1990); and using programme theory to evaluate the complicated and complex aspects of reform interventions (Rogers, 2008). Their integrated elements may assist in enhancing implementation theory and answering whether change management theory can guide enduring public sector reform.

Whether APS management reform endures and becomes embedded cannot currently be shown in theory or practice. This chapter has revealed the contrasts between existing theory and past APS practice in implementing management reform. It has begun to highlight the cross-disciplinary factors in implementing management reform that may or may not result in permanent changes of practice. The factors identified as having most impact are: steering management reform beyond its commencement; the relationship between the reforming centre and other actors throughout a large and dispersed organisation such as the APS; and the length of time required to effect change that is both permanent and effective. This highlights that the biggest challenges are maintaining the momentum of implementation and evaluating any later outcomes for their impacts. These challenges form the background to the four questions at the beginning of this chapter.

Chapter 5 sets out the methodology addressing those four questions. The factors discussed above will be examined in a single case study involving two pilot studies and thirty-two interviews. The study draws upon expert commentary and practitioner analysis of APS reform and implementation by central APS agents of accountability, together with reflections by active and retired APS practitioners and other non-APS but interested parties. This research has been complemented by documentary analysis of research in the three disciplines of implementation, change management and evaluation, together with practice-based accountability literature from agencies such as Parliamentary Committees and the Australian National Audit Office. This qualitative research complements and enhances current paradigms in implementation and change theories.


Chapter 5 Methodology

The case study relies on many of the same techniques as a history, but it adds two sources of evidence not usually available as part of the historian’s repertoire: direct observation of the events being studied and interviews of the persons involved in the event (Yin, 2014: p.12).

5.1 Introduction

Chapter four outlined the current lack of a framework for implementing public sector management reform that leads to its effectiveness, results and long-term, embedded impact. This research sought to determine what elements could contribute to such a framework. This chapter sets out the research objectives, defines the parameters of the case study, establishes the rationale for this study, and sets out the interview methodology and its triangulation with documentary sources. It then describes the data analysis procedures and summarises the trustworthiness of this research. This research examines the variety and quality of personal experiences with APS management reform, obtaining details that would be “difficult to extract or learn about through more conventional research methods” (Strauss & Corbin, 1990: p.11). Through purposive and snowball sampling, this qualitative research draws on APS staff at management and operational levels, plus the experiences of their organisations with management reform, and was conducted with two research objectives.

5.2 Research Objectives

The first objective is to review the potential contribution of change management in the public sector, as a factor in present frameworks of management reform implementation. The second is to examine whether the changes started by reform can endure long enough to become embedded. In support of these objectives, there are four research questions: (1) Did this management reform of program evaluation become embedded in the APS? (2) What is the role of public sector change agents in embedding APS management reform? (3) How can change management frameworks explain the challenges of implementing APS reform policies? (4) What insights might be learnt from applying a lens of extended time to implementation theory and to examining how reforms endure?

There has been limited research evaluating the enduring outcomes of implementing reform, in both the APS and the public sector more generally (Briedahl et al., 2017; O’Flynn, 2015). This research was designed to discover insights into, and interpret reasons for, a management reform having been commenced and considered implemented, but then ceasing to be mandatory and not becoming embedded in permanent APS practice (Noor, 2008). Program evaluation is again a reform factor in APS non-financial performance (Morton & Cook, 2018). The similarities to the earlier management reform provide context for this study’s use of ‘longitudinal’, ‘long term’ and ‘extended time’. The period between 1984 and 1996 establishes the central question of this research: whether this management reform of program evaluation became embedded in the APS. The rationale for choosing qualitative research in a single case study follows.

5.3 The Case Study

The unit of analysis is a single case study of program evaluation in the APS management reform known as ‘Managing for Results’ (MfR), which may produce a novel comprehension of implementation theory (Strauss & Corbin, 1990). From the seventeen APS management reforms between 1976 and 2013 (Figure 1, chapter 2), program evaluation was chosen because it had clear dates of implementation (1984) and finalisation (1996), establishing distinct case boundaries for this study. The absence of program evaluation after 1996 (Mackay, 2011) and the management terminology of outputs and outcomes (Halligan, 2007) have not been studied in depth and are not the direct priority of this research. This period of twelve years contrasts with other ranges, of fifteen to forty years, for establishing reform outcomes (Boston, 2000; Pollitt, 2013). These ranges provide a context of extended time for the outcomes of APS management reforms, as discussed on pages 3, 11 and 37.

This case study draws upon researcher and practitioner analysis of APS management reform. Interviewees included central agents of APS accountability, active and retired APS practitioners, plus non-APS but interested parties. Their responses were analysed with NVivo. The interviews were complemented by documentary analysis of research in the four fields of implementation, change management, organisation research and evaluation, plus practice-based accountability literature from agencies such as Parliamentary Committees and the Australian National Audit Office.

5.4 Rationale for Qualitative Single Case Study

The choice of qualitative research is outlined first, followed by the rationale for a single case study. Qualitative studies are sources of rich description and provide “interplay between researchers and data” (Strauss & Corbin, 1990: p.13). Such studies can lead to possible explanations of local phenomena and their flow over time, producing conclusions from observation and documentary analysis rather than being statistically based (Miles & Huberman, 1984; Strauss & Corbin, 1990). Qualitative research has become increasingly popular, but needs analysis and interpretation of patterns in a set of categories (Attride-Stirling, 2001; Mishler, 1990). Qualitative research also needs to be credible, transferable, dependable and confirmable (Guba, 1981). Figure 8 sets out the application of those four criteria in this thesis.

Figure 8: Four Criteria of Guba as Applied in this Thesis

Credible. Guba: prolonged engagement; persistent observation; collect referential adequacy materials. In this thesis: pilot interviews (section 5.7); triangulation (section 5.10).

Transferable. Guba: collect thick descriptive data; select successive interview subjects by asking each respondent to nominate someone whose point of view is as different as possible from his or her own. In this thesis: interviewees were selected by snowball sampling until saturation, when no new information was generated (section 5.8); interviews were conducted at multiple levels of APS managers and staff.

Dependable. Guba: two or more methods are teamed in such a way that the weakness of one is compensated by the strengths of another. In this thesis: see the following “triangulation”.

Confirmable. Guba: (1) triangulation, with data from a variety of perspectives, using a variety of methods and drawing upon various sources; (2) practising reflexivity: the inquirer discusses and documents shifts and changes in his or her orientation. In this thesis: (1) eight sources of other information were used (section 5.10); (2) potential researcher bias is considered in section 5.6.

Sources: Guba (1981: pp.80-87); Morse (2015).

This thesis intends to make a contribution to theory, rather than to statistically based conclusions (Baskarada, 2014). Accordingly, it adopted a qualitative approach, comparing empirical results against previous implementation theory.

The case study method of this research adds further understanding to theories of implementation and management reform. This is especially so when the boundaries of the study are known, as the method can be used to understand more detail than may be obvious to the casual observer (Eisenhardt, 1989; Stake, 1978; Tellis, 1997). An important aspect of understanding this detail is that the researcher is part of the study (Stenbacka, 2001). A single case study can add new information about established theory, because it provides two valuable sources of evidence: direct observation or reporting of the subject events, plus interviews with key agents and others involved in the event(s) under study (Rowley, 2002; Yin, 2014). Yet the researcher can potentially be conflicted, being both an insider who understands the subject under review and an objective outsider as the researcher.

This potential personal conflict is longstanding (e.g. Deutsch, 1981). As discussed more fully in section 5.6, this method raises issues of my status as a former insider of the APS and an outsider as a new researcher. My four decades of APS employment provided the following insider advantages: extensive periods of administrative and policy employment in programs of three APS Departments (prior to the MfR reform, during it and afterwards), combined with policy and project work in three additional central agencies. Such policy and program experience proved advantageous in my analysis of APS reform practice and literature. My insider background provided several additional advantages.

These advantages were access to staff and an understanding of the APS culture. Personal knowledge facilitated access to current and former APS staff through my prior relationships, while my understanding of the culture under review enhanced rapport and communication with interviewees (Hockey, 1993). Those positives can be counterbalanced by the negative of going native: becoming too familiar with the participants and taking their underlying experiences for granted (Hockey, 1993). There is also a danger of this insider world view acting to filter the interviewees’ responses (Chavez, 2008; Hockey, 1993). Ready identification of key interviewees was derived from my professional experience of the public sector reform series (Barrett, 2014; Holmes & Shand, 1995; Wettenhall, 2013). On balance, this insider background was advantageous.

A researcher can be both an insider and an outsider. There is no need for an either/or research environment, especially as there has been no common distinction between insider and outsider (Gair, 2012; Hellawell, 2006). An alternative viewpoint might consider that I had been only a partial insider, as I had not been at those senior management levels (e.g. Secretary or Deputy Secretary) associated with implementing top-down management reform. My lengthy memberships of IPAA (27 years) and the AES (13 years) provided me with alternative professional acculturations and perspectives on the APS that underpin this research. My background was disclosed in advance to participants, including my policy participation in some of those reforms. These diverse exposures to APS policy and practice served to widen my research perspectives.

Those perspectives are reflected in the four fields of this thesis: public sector management reform implementation; change management; organisation research; and evaluation. The frames of reference of those fields underpinned the following seven categories of interviewees, who were mixtures of former and current APS staff, independent auditors, professional researchers, external consultants and former government Ministers. These interviewees included: (1) former APS staff involved in the 1980s reforms, with a mix of levels from Secretaries and senior management down; (2) current Australian academics with published research on APS management reforms or evaluation; (3) former senior staff of the Australian National Audit Office; (4) professional bodies such as the AES; (5) external consultants; (6) current APS staff involved in the PGPA Act reforms, from senior management down; and (7) former Australian Government Ministers. This research is meant to “make the familiar strange” (Hockey, 1993: p.208). This is ensured by sufficient distance in time and management level, allowing analysis by the outsider half of my role in this case study.

This single case study describes, analyses and interprets the relationships between one reform’s start and its finish, including its implementation agents and the conclusion of that reform, although it is neither typical of the seventeen reforms (Figure 1, chapter 2) nor a sample (Mishler, 1990; Stenbacka, 2001). The MfR reform of program evaluation was chosen because it began in 1984 and had ended by 1996, and its non-financial performance requirements are being repeated after a gap of seventeen years in the current PGPA Act. As an instrumental case study, the MfR reform was “examined to provide insight into an issue or refinement of theory” (Stake, 1994: p.237). This research provides an in-depth examination of the context beyond the casual observer and helps refine implementation theory as it may have been applied to APS reform.

The trustworthiness of this research is ensured through the triangulation of information sources and by acknowledging investigator bias. Such bias may be of three types: (1) a “tendency for the researcher to see what is anticipated; (2) …choose relatively small and excellent examples (of what is being studied)… (3) occurring unconsciously in the research design. The questions may be biased” (Morse, 2015: pp.1215-1216). The potential for this bias is addressed in section 5.6. The context for understanding the coming and going of a single management reform is in section 5.5. That reform demonstrated APS effectiveness through program evaluation and was implemented but not maintained over time, nor apparently made permanent or effective for the long term (Blackman, 2015; Pal & Clark, 2015; Pollitt, 2013). It represents a limited, observable and measurable case study. The study illustrates the initiation of a reform, its considered implementation and the duration of its associated change, examining why it did not become embedded in permanent APS practice. It is a practice-based single case study.

Case study research expands theory. It adds to existing knowledge and helps build theories, as a defensible methodology when carried out with appropriate care (Eisenhardt, 1989; Stake, 1978; Tellis, 1997). A single case study can test established theory and add to existing knowledge, while testing a particular proposition (Flyvbjerg, 2006; Rowley, 2002). This thesis examined the implementation of the 1980s APS management and performance reform of program evaluation and tracked those initial changes, partly through interviews with stakeholders and researchers of that reform. It seeks to contribute to current theory through a component of extended time, as case studies are preferred for answering how and why questions (Yin, 2014). Context is now provided to the experiences of the interviewees, with background on demonstrating APS program performance.

5.5 Context: Demonstrating APS Program Performance

This background provides a long-term context to the experiences of those interviewed. As the main component of the 1980s MfR era, evaluation was “one of the critical tools available to assess program performance” (ANAO, 1997: p.xi; Keating, 1990). A rationale for that management reform had been the “requirement that ‘performance’ is measurable and reported via indicators, and that programmes are evaluated” (Guthrie & English, 1997: p.155). Program evaluation was introduced in 1988 and mandated for all APS agencies through Portfolio Evaluation Plans (PEPs).

These PEPs were no longer required in 1996, although program evaluation was not formally abolished. This occurred because the new Government and the Finance Department Secretary instead prioritised introducing the new financial reform of accrual accounting (Mackay, 2011). The absence of this reform of program evaluation and its application in demonstrating the effectiveness of APS program performance has received only limited attention (Campbell, 2001; Hawke, 2012; Mackay, 2003; Mackay, 2011; Taylor, 2006). Notably, both Mackay and Hawke were senior APS managers [28] in the Finance Department’s original implementation of the MfR reform in the 1980s, illustrating the value of the practitioner-based comments in this research. Their later commentaries, in 2011 and 2012, also illustrate the long-term perspective adopted in this thesis.

After a gap of seventeen years between 1996 and 2013, APS agencies are again required to demonstrate their effective non-financial performance. Under the Public Governance, Performance and Accountability Act 2013 [29], s.38(1) requires that “the accountable authority of a Commonwealth entity must measure and assess the performance of the entity in achieving its purposes”. As Morton & Cook (2018) outlined, evaluation is now considered an important means of demonstrating that non-financial performance. This compares with the earlier means of assessing agency performance by the MfR element of program evaluation. Perceiving such a link may indicate potential researcher bias, which is considered next.

[28] As Senior Executive Service managers in the Finance Department during the MfR implementation period.
[29] At https://www.legislation.gov.au/Details/C2016C00414

5.6 Potential Researcher Bias

This section considers the potential for researcher bias, expanding on the three types identified by Morse (2015). A tendency for the researcher to see what is anticipated might arise from my APS employment throughout the period of those reforms, 1976-2013, and my contributions to the later implementation of this program evaluation reform and other APS reforms. The context of this potential insider/outsider bias is my professional background, which has provided me with insights into this particular reform and allowed me to bring expert knowledge to this study, heightening my reflection (Le Gallais, 2008). It also illustrates my familiarity with the study domain, to which I have contributed in the following eleven ways.

These contributions were made through relevant developments of agency or APS policy and practice, the Institute of Public Administration Australia (IPAA) and the Australian Evaluation Society (AES). The eleven contributions are:
(1) while a member of the Industry Department, published in the Evaluation Journal of Australasia (1992);
(2) Project Officer for the Management Advisory Board’s APS policy guideline No.8, “Contracting for the Provision of Services in Commonwealth Agencies” (MAB, 1992);
(3) undertook an internal evaluation of a University unit funded by the Health Department;
(4) co-authored the Health Department’s Program Evaluation Guidelines (2006);
(5) initial Assessor (reader) of Departments’ accountabilities during the 1990s and 2000s, for the APS Annual Report Awards made each year by the IPAA, ACT branch;
(6) final Judge for IPAA ACT of those Annual Report Awards (2014);
(7) member of an AES Working Group preparing the Society’s input to the Department of Finance’s initial implementation of the PGPA Act (2015);
(8) while undertaking this research, member of the moderating panel recommending final abstracts, keynote speakers and seminar presenters for the International Conference of the AES in Canberra (2017);
(9) contributed to the AES’s submission to the Independent Review of the Implementation of the PGPA Act and Rules (2017);
(10) member of the Working Group which drafted the AES submission to the Prime Minister’s Review of the APS (2018);
(11) maintained professional competencies throughout these periods through current memberships of IPAA (since 1992) and the AES (since 2006).

The second potential bias was choosing relatively small and excellent examples. Any such bias in my choices of interviewees was addressed by the selection process. Those selected included a mixture of participants inside and outside the APS, who either had no background in the 1980s MfR reform or formal evaluation, or were currently experiencing the implementation of the PGPA Act. Research input was also obtained through interviews with three academics with relevant backgrounds in public administration and program evaluation. My personal reflexivity was balanced by interviews with personnel from differing APS levels (see figure 3, chapter 2) or who were external to the APS, and by triangulation with analysis of both relevant research and practice-based literature (Guba, 1981). This qualitative research addresses the question of how frameworks for the long-term implementation of APS reform might be enhanced. The study design follows, together with its sampling and data-gathering methods of interviews and documentary analysis. To address the third possible bias, biased questions in the research design, the first draft of the interview questions was tested for comprehension and topic relevance through the following two pilot interviews, which provided initial comments and two senior perspectives from research and practice.

5.7 Pilot Interviews

Two pilot interviews were conducted in early 2016, to address any potential criticisms of subjectivity in the interviewee questions and to identify any gaps in the design of this research (Sampson, 2004). The first interview was with a former APS Secretary of multiple agencies; the second was with an academic in the public administration field who had published extensively on management reform and implementation practice. Both agreed in writing to participate. The purpose was to obtain their initial comments on this research and the interviewee questions, from their respective perspectives of senior management practice and research. This feedback resulted in some small changes to the interview questions. As both were in the same city as the researcher, this geographic proximity facilitated arranging interview contacts and providing feedback (Yin, 2014). It provided an initial balance between theory and practice, for clarifying any issues of uncertainty (Lancaster, 2015). Prior to both the pilots and the following interviews, five background documents were sent to each person: (1) an Invitation to Participate; (2) a Participant Information Statement and Consent form; (3) a one-page outline of the research project; (4) a copy of the UNSW Canberra Ethics Committee approval; and (5) the final interview questions. These are attached in Appendices 2-6.

5.8 Interviewee Selection

The initial interviewees were selected through purposive sampling of those with the greatest potential to understand this issue (Palys, 2008), “based on the judgement of the researcher as to who will provide the best information to succeed for the objectives of the study” (Etikan & Bala, 2017: p.215). They were selected for their internal and external perspectives on APS management reform, past and present, and agreed in writing to participate. This purposive sampling targeted knowledge about the management reform of implementing program evaluation, through participants who had been its promoters, APS users or advisers, or its assessors from external agencies, plus researchers. Given the variety of individuals in those groups, initial participants were sought from published material and the researcher’s personal knowledge. That initial sample was then extended through snowball sampling: asking for further nominations from the initial group (Goodman, 1961). This sampling was subsequently expanded to mean building “a study sample through referrals made among people who share or know of others who possess some characteristics that are of research interest” (Biernacki & Waldorf, 1981: p.141). Snowball sampling was used as it is regarded as “the most widely employed method of sampling in qualitative research in various disciplines across the social sciences” (Noy, 2008: p.330), further clarified in the more specific meaning of “snowball sampling in hard-to-reach populations” (Goodman, 2011: p.347). Such purposive sampling allowed the emerging insights to be maximised by a widening range of invitees and their potential viewpoints (Guba, 1981). The method of choosing the interviewees through that snowball sampling follows.

Snowball sampling began from a list derived from the researcher’s personal and professional knowledge. Those first interviewees were chosen for their publications on public sector reform implementation, the researcher’s personal knowledge of their place in APS reforms, and their availability. The participants had previously been involved in any of these processes: (1) initiating this evaluation reform; (2) its implementation and use within the APS; (3) its subsequent external review; (4) expert commentary; or (5) relevant research. Each interviewee was asked for further nominations of others who had similar backgrounds or research interests in implementing APS management reform (Biernacki & Waldorf, 1981) and, where someone was identified, for that first interviewee to approach her or him and alert them to this research. This also enabled the case study to be manageable in time and cost. Sampling then proceeded to the other individuals nominated by those initial participants.

Those interviewed were not statistically representative of all APS staff over those thirty-five years. No single listing of all those individuals exists to constitute a population (Lu & Henning, 2013). Since the start of these reforms, that extended time of thirty-five years has combined with an ever-changing number of APS staff in each of those years: for example, from 115,585 in 2002 to 152,594 in December 2017 (APSC, 2018a). Staff contact details do not exist in any centralised APS database that could serve as a population for sampling, and the individual contact details of past or current staff are not available for privacy reasons. Also, sixty-two per cent of APS staff are located outside Canberra in the regional offices of their agencies and would have been physically difficult to access, indirectly representing the difficulties of Goodman’s “snowball sampling in hard-to-reach populations” (Goodman, 2011: p.347). Most interviewees were located in Canberra and represented a top-down central office perspective, rather than that of street-level bureaucrats (Lipsky, 1980). This resulted in thirty-two interviews, mixing the views of former Secretaries, former and current SES managers, and former and current APS staff with those of external non-APS consultants. According to Morse, “researchers cease data collection when they have enough data to build a comprehensive and convincing theory. That is, when saturation occurs” (Morse, 1995: p.148). Sampling continued until observations were being repeated and saturation was thus apparent in interviewees’ comments.

Saturation occurs when no new information is being found, nor any value being added from participants’ responses (Morse, 1995). The concept of saturation has been criticised as nebulous and lacking an underlying theory, making its explanation in the research essential (Bowen, 2008). Saturation might be achieved as early as twelve of sixty interviews, or with a minimum of fifteen (Baskarada, 2014; Guest et al., 2006). Not clearly discussed in snowball sampling has been the question: how much data are sufficient? One answer to that question of ‘how many’ has been “it depends” (Baker & Edwards, 2012: p.42), best left to the researcher’s judgement (Fusch & Ness, 2015; Yin, 2014). In reviewing whether saturation had occurred, greater emphasis was placed on the richness of the interview data than on the number of interviews (Morse, 1995). This richness was found to vary with experiences of reforms past and present, in management, at APS operational levels and in research.

After two former Auditors-General, four former APS Secretaries, two former APS Commissioners and two current Professors of Public Administration (see Figure 9) had been interviewed, certain common implementation themes and similar key individuals were being repeated. Those preliminary conclusions were then cross-checked with lower levels of both former and current APS staff. Analysis was rated as saturated (Eisenhardt, 1989) when no new open NVivo codes were generated. This was the preliminary result from the following thirty-two interviews conducted in 2016.

5.9 Interviews Conducted

Between April and November 2016, forty-five invitations were made. Thirty-two were accepted, eight were declined and the remaining five received no response. The interviewees' backgrounds are set out in Figure 9. The cumulative total is greater than thirty-two, as several of those interviewed had had successive but professionally distinct career positions.

Figure 9 Backgrounds of Interviewees

FORMER (total 21): Ministers = 2; Auditors-General = 2; APS Secretaries = 4; APS Commissioners = 2; SES = 5; Executive Level 2 = 6

CURRENT (total 14): APS: SES = 3, EL 2 = 2, EL 1 = 2; Consultants (ex-APS) = 5; Consultants (non-APS) = 2

OTHER (total 7): Academics = 6; AES = 1

This mixture acknowledged the multiple realities in implementing APS management reform.

All interviewees were invited to expand on the original questions, which contributed further personal conclusions and richer analysis. Some interviewees were also able to provide joint perspectives from both past APS practice and current, relevant research. Also included were interviews with implementers of the current management reform, the PGPA Act. Interviews with serving APS staff were undertaken as those staff became available, serving as a partial control group unfamiliar with past practices of evaluation but familiar with those under the PGPA Act. This method provided opportunities to include operational links with the most recent implementation of reform in APS performance and effectiveness. All interviewees were advised their comments were off the record, and no interviewee declined to be recorded.

Interviews were recorded and were conducted in three settings: the interviewee's personal Canberra office (21), a private room at UNSW Canberra (8), or video-conferencing by Skype (2) for interviewees located interstate. There were also email exchanges with a former government Minister who had declined to be interviewed because of professional workloads. Interviews were semi-structured and open-ended, establishing the context of the interviewees' experiences with management reform and any of their APS responsibilities or research interests. The eventual thirty-two interviews mixed Government and APS experience with relevant research backgrounds.

Those interviews drew upon the experiences of participants in implementing APS management reforms, whether in the past or currently under the PGPA Act. Research balance was provided by interviews with three academics with extensive research interests in public administration, reform and evaluation, which identified further research issues. The participants included those who had led or experienced the changes of the MfR reform during 1984-1997. Additional insider/outsider perspectives were provided by former APS staff who had worked at various levels below Secretary and who now held other professional positions outside the APS, e.g. as private sector consultants. Those former staff included some with, and some without, direct experience of APS reform after 1997.

Interviewees included those involved with later management reforms, including 'Ahead of the Game' (AGRAGA, 2010; Lindquist, 2010), and those implementing the current reform of APS accountability: the Public Governance, Performance and Accountability (PGPA) Act 2013 (Finance, 2014). In recognising the differing experiences and responsibilities of the participants, I trusted the memories of all interviewees and their abilities to reflect in a systematic way. This provided the perspectives of public sector practitioners over time. Interviews lasted between one and two hours and were later transcribed by a professional transcription service. Each interviewee received a copy of the transcript, which was the basis of further analysis.

Analysis started from the first interview. Such analysis provides the basis for later development of theory and highlights the importance of the insights from that first interview: the pilot interviews (Corbin & Strauss, 1990). Further benefits from the pilot interviewees' comments included testing for unanticipated impacts and any potential deficiencies in the design of this research (Sampson, 2004). This research has sought to explain the implementation of one management and performance reform, to describe how its principals acted and to identify the impacts that flowed from their actions (Corbin & Strauss, 1990). It will help advance implementation theories (Eisenhardt, 1989), which currently lack a look-back over extended time or an evaluation component. These interviewees provided the distance advocated by Hockey (1993) and multiple alternative perspectives, through input from APS practice, research, the professions and accountability to Parliament. This enabled triangulation of my conclusions and established their relevance.

5.10 Triangulation

Theory may be developed from a case study by cross-comparison with practice-based sources. Using different theories and methods in developing research conclusions assists in this triangulation (Guba, 1981; Eisenhardt, 1989). This provides a wider perspective on historical developments and supports "converging lines of inquiry" (Yin, 2014: p.120). Reforming the APS means changing the behaviours of many APS staff in multiple departments, where most of those staff (sixty-two per cent) are geographically located in regions at a distance from the Secretaries and headquarters in Canberra (APSC, 2018a). Consequently, implementing management reform includes systemic changes in APS organisational practices and requires more than initiating a top-down management reform. This thesis assesses whether implementation theory contains frameworks that combine systemic agency change from management reform with the next steps after initiating that change, and with the links to embedding subsequent and measurable long-term impacts. This involves cross-disciplinary analysis through the following four frameworks.

This study draws on four fields: organisation research over extended time (Pettigrew, 1990; Ployhart & Vandenburg, 2010); change management (Fernandez & Rainey, 2006; Gill, 2002; Kotter, 1995; Stewart & Kringas, 2003); evaluation (Leithwood & Montgomery, 1980; McLaughlin & Jordan, 1999; Scheirer, 2012; Wholey, 1987); and implementation (Baehler, 2007; Blackman et al., 2013; Lindquist et al., Eds, 2011; Marsh & McConnell, 2010; O'Flynn, 2015; Pal & Clark, 2015; Wanna, Ed. 2007). These provide the expanded frameworks underpinning this case study of the APS management practice of program evaluation. To examine later APS implementation practice, further management reforms and their rationales were included (e.g. APSC, 2014a; DPM&C/ANAO, 2006; Finance, 2013; Moran, 2013; Tanner, 2008; Tune, 2010). This identified any recurring practice-based themes of 'effectiveness', 'embedding' or 'performance', and any overlapping rationales for public sector management reform.

APS publications about those management reforms were sources of further analysis. In the past thirty-five years there has been a continuing series of APS reforms (figure 1, chapter 2), and many practice-based documents concerning the initial implementation of those reforms, their subsequent use across the APS, later analysis by academics, and eventual review by Parliamentary Committees and the Auditor-General of the APS's later effectiveness in delivering programs. Analysis of these multiple documentary sources enabled interim conclusions to be tested and perceptions of researcher bias to be avoided (Stake, 1994). Initial documentary analysis included contemporary publications, commentary and research about the use of program evaluation in the APS, and later initiatives associated with similar reforms for demonstrating effective APS performance. A summary of these sources follows.

Publications by the following eight authors on APS management reform were utilised in this triangulation. They relate to the central influences on the implementation of APS management reforms, their subsequent practice and other external reviews:

1. Department of Finance. This Department was the agency implementing the 1980s MfR reform, a later similar performance reform known as “Operation Sunlight” and now the current PGPA Act 2013 (Finance, 2008; Finance, 2017a; Keating & Holmes, 1990; Tanner, 2008).

2. Management Advisory Board (MAB). Tasked with APS management coordination and service-wide policy implementation, the MAB issued a series of centrally-endorsed APS policies and practices totalling twenty-one between 1991 and 1996. These publications were also intended to provide APS members with “insights to help with managing change in their organisations” (MAB, 1992: p.iii). Created in 1987, the MAB was chaired by the APS Head: the Secretary of the Prime Minister’s Department. Examples of its policy statements or models of best practice included risk management, accountability, and asset management (NLA, 2017; Verspaandonk et al., 2010). In 1996, it was replaced by the Management Advisory Committee (e.g. MAC, 2010), functioning similarly and with all Secretaries as members. In 2012, MAC was replaced by the Secretaries Board with similar APS-wide responsibilities (APSC, 2012). Publications utilised in this thesis were contracting out of program delivery (MAB, 1992), Building a Better Public Service (MAB, 1993), functions of the SES in the One APS - One SES (MAC, 2005) and Empowering Change in the APS (MAC, 2010). These documents provided background on the intentions of a variety of top-down APS reforms.


3. Cabinet Implementation Unit (CIU) – Prime Minister's Department. Created in 2006, the CIU monitored Government decisions to ensure they were subsequently implemented by APS agencies on time, on budget and to expectations (Wanna, 2006). The Unit was abolished in 2015 (Gold, 2017). Seven of its toolkits on APS implementation planning, evaluation and management remain available online30, but without any direct connection to advisory staff of that formerly discrete Unit.

4. Intergovernmental Agreements. The inter-governmental relations between the Commonwealth and the constituent States generally (and reform implementation relationships in particular) have not always been uniform, changing from coordinate, to cooperative, to centralist and collaborative, and leading more recently to centralised executive federalism (Phillimore, 2013; Wanna & Weller, 2003). The 2008 Intergovernmental Agreement on Federal Financial Relations requires national Implementation Plans31 for the resulting National Partnership Agreements with the States and Territories. These Agreements set out the means of achieving national outcomes in specific sectors such as health, education and housing. They illustrate the significance of the Australian national Government in implementing nationally-uniform programs through the subsidiary States and Territories.

5. Australian National Audit Office (ANAO). As another source of centralised guidance to the APS, the ANAO issued APS 'Better Practice Guides' over several decades on, for example, Annual Reports (ANAO/DoFA, 2004), policy implementation (ANAO/DPM&C, 2014), fraud control, asset management, contract management, and implementing and administering grants. Since 2011, the ANAO has also been authorised under Section 18A of the Auditor-General Act 199732 to audit definitions of agency performance: Key Performance Indicators (KPIs). The ANAO also audits agencies' later actual implementation of its earlier recommendations and is able to audit the Annual Performance Statements required under Section 15 of the PGPA Act 2013 (ANAO, 2013a; ANAO, 2017b). These are heightened roles for the ANAO and reflect a renewed central concern for evaluating the actual outcomes from claimed acceptance of its Reports.

30 At http://www.dpmc.gov.au/government/policy-implementation
31 Federal Finances Circular 2015/02: www.federalfinancialrelations.gov.au/content/circulars/Circular_2015_02.pdf

32 Source: https://www.legislation.gov.au/Details/C2016C00685

6. Parliamentary Committees. The documents examined were Reports by the House of Representatives Standing Committee on Finance and Public Administration, the Senate Standing Committee on Finance and Public Administration (SCFPA) and the Joint Committee on Public Accounts and Audit (JCPAA). These Committees have particular responsibilities for examining national public administration and regularly review APS accountability, e.g. by reviewing Annual Reports (e.g. JCPAA, 2002; SCFPA, 1990; SCFPA, 2007). The JCPAA also has "an active general oversight role" under the PGPA Act33 and is thus a key Parliamentary body reviewing APS performance and accountability.

7. Australian Public Service Commission (APSC). Between 2012 and 2016, the APSC reviewed APS agencies through capability reviews of how an organisation aligns its processes, systems and its people's expertise to deliver on objectives. With twenty-two completed, these reviews are a comprehensive overview of APS practices (APSC, 2013f; APSC, 2016) and provided evidence of agency-by-agency capabilities, independent of Secretaries' management.

8. Speeches by Departmental Secretaries. These are views from the top by members of the Secretaries Board. This Board “sets the overall direction for the APS, drives collaboration, prioritises collective resource use to achieve cross-boundary solutions and gives priority to the creation and maintenance of a One-APS shared culture” (APSC, 2014a). They are guides to departmental management priorities and practice, e.g. Finance (Tune, 2010); Infrastructure (Mrdak, 2015). They are especially relevant when made by the APS Commissioner (Briggs, 2007a & b; Lloyd, 2017a & b; Podger, 2004; Sedgwick, 2011a), or the Secretary of the Prime Minister’s Department (Moore-Wilton, 1999; Moran, 2013; Parkinson, 2016). That latter position is significant in implementation, as the APS Head and Chair of the Secretaries Board.

The practitioner interviews were triangulated against documents from those eight sources, which intersected with three reform factors: the intentions of top-down reform; the experiences of APS managers and staff in the reform processes; and the actual outcomes of APS reform practice, as tested by Parliament and the ANAO. Those eight sources provided rich, practice-based information. They supported the creation of the case study by providing five advantages: (1) establishing the APS context; (2) highlighting further questions for analysis; (3) creating additional data; (4) providing a means of tracking operational changes in the research subject; and (5) offering a way of validating or cross-checking information from elsewhere (Bowen, 2009). Those

33 http://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Public_Accounts_and_Audit/Role_of_the_Committee

documents added context to the uses of program evaluation and other demonstrations of effectiveness, reform timelines, policy changes, key agents of influence in APS change and contemporary research. The analyses also assisted in examining whether APS reforms were merely a series of inputs that did not produce demonstrable and embedded impacts (Prasser, 2004). This enabled examination of (1) the independent assessments subsequently made of those reforms (e.g. by the ANAO) and (2) any later assertions of the need for APS reform, against (3) the earlier claims of implementation success.

The assessments of implementing the MfR evaluation reform used in this thesis were derived from semi-structured interviews, each asking the same initial questions with the intention of producing comparable data. Such interviewing can draw out unanticipated responses and permits interviewees to enhance the original questions with their unique experiences. Through this two-way communication, responses and data can be optimised, representing multiple experiences and knowledgeable commentary by insiders and observers with differing viewpoints (Eisenhardt & Graebner, 2007; McIntosh & Morse, 2015). Both categories of response formed part of the research text.

It was also important to provide a separate means of interrogating these responses, facilitating an understanding of how the evidence was established (Yin, 2014). The chain of evidence commenced with the citations associated with the initial analysis and interim conclusions, then set out the final questions used at each interview, the methods of choosing each interviewee, how and where the interviews were recorded, and the professional transcripts made of each interview. The associated database has been stored in the UNSW Data Archive under secure password-protected access. The interview results were analysed using NVivo, as detailed in the following paragraph 5.11 Data-Analysis Procedures.

5.11 Data-Analysis Procedures

There are three elements in data-analysis: data reduction, data display and drawing conclusions. Qualitative data can be sources of "well-grounded, rich description and explanation of processes occurring in local contexts" (Miles & Huberman, 1984: p.21). This is the basis of theorising from a case study, although "a huge chasm often separates data from analysis" (Eisenhardt, 1989: p.539) because of a lack of considered discussion connecting the two. This chasm may be bridged by creating memos as the interviews and recordings progress, identifying any theory or emerging variation from it and assisting in the formal reporting of the research (Corbin & Strauss, 1990). The interview transcripts were analysed with NVivo 11 Pro software, to code and link the broad categories of responses as outlined by the interviewees. This was the first step towards more rigorous identification of themes and insights, enabling later rounds of coding to bring out highlights and concepts in the testing and development of theory (Saldana, 2009). Following Corbin & Strauss (1990), three types of codes were generated: open (interpreting the initial data and providing new insights); axial (relating categories and sub-categories); and selective (unifying all codes around core themes).

The initial questions were standardised (appendix 6). It was also important to be mindful of other interview themes providing unanticipated insights and additional research directions, which duly occurred. Each interviewee was given a personal code (appendix 7), which maintained anonymity in the coding of their answers. Answers were derived from issues self-selected by interviewees, with the initial questions generating the initial open coding (Corbin & Strauss, 1990). This identified themes inductively, as presented by the participants, avoiding any bias from the investigator's past APS history and participation in the reforms. Finer analysis of the answers subsequently allowed greater discrimination in understanding practice-based meanings of management reform in the APS.

All the participants were aware the research was focussed on APS management reforms. The major question (whether this management reform of program evaluation became embedded in the APS) was raised in interview Question 8: What practices of embedding a reform in the APS have you read/written/practised? Interviewees were able to consider this question and their answers resulted in the multiple occurrences of Open Code 4: Reform-Embedding. This major research question had been reinforced by forty per cent of the questions provided in advance containing the word 'reform'. Initially, broad categories such as 'performance', 'reform', 'government' and 'APS' were identified. After several transcripts had been reviewed, it became noticeable that more complex meanings of those words from APS practice were emerging in the responses. As Strauss & Corbin have also observed, the researcher ideally applies creativity in analysing data: "creativity manifests itself in the ability of researchers to aptly name categories, ask stimulating questions, make comparisons and extract an innovative, integrated, realistic scheme from masses of unorganised raw data" (Strauss & Corbin, 1990: p.13). By cross-referencing key elements from the broad categories of the open codes, I was able to apply creatively the insider perspective summarised in paragraph 5.6 Potential Researcher Bias. Deeper refinements were then developed.

Those refinements were developed from such broad categories as 'Reform' (for example, as 'reform-change'), 'Performance' (as 'performance-long-term'), 'Government' ('-influence on reform') and 'Management' ('-SES'). Because the interviewees' responses were not limited to direct answers to each question, their answers were coded in multiple categories. For example, comments about a particular Minister's influence on a Secretary during reform were coded both as 'Government - influence on reform' and, more particularly, as 'Ministers - advice to or directions from'. Appendix 8 sets out the bases for the resulting forty-three open codes, which are set out in Figure 10, ranked in order of most-frequent occurrence. These occurrences and their frequencies begin to identify the priorities of those interviewed and the significance of those factors in implementing management reform.

Figure 10. Open Codes in Ranked Order (times mentioned/inferred)

1. APS - culture and roles 156
2. Performance - including evaluation 133
3. Performance - framework 82
4. Reform - embedding 80
5. Government - influence on reform 71
6. Reform - change agents 71
7. Management - Secretaries, incl. existing and later commentaries 71
8. Performance - long term importance or absence 64
9. Reform - implementation in terminology 64
10. Performance - central agencies' influence 61
11. Reform - PGPA Act 57
12. Management - SES 55
13. Ministers - policy advice to or directions from 49
14. Accountability 45
15. Reform - basis of 42
16. Reform - developments - external 36
17. Performance - ANAO Efficiency, Performance, BP Guides 35
18. Parliament 33
19. Reform - systemic 33
20. Ministers - Secretary relationships 32
21. Reform - agenda 30
22. Reform - FMIP or managing for results (specifically) 30
23. Research - impact of 29
24. Reform - series 25
25. Reform - history 23
26. Reform - program logic 23
27. Government - responsiveness 23
28. Reform - evaluation or lack of 21
29. Prime Ministers - influence 20
30. Reform - developments - internal or progressive 19
31. Reform - implementation theory 15
32. Policy - strategic use in APS 15
33. Performance - joined up government: CW and S and Ts 14
34. Reform - Capability Reviews 13
35. Policy - versus delivery 13
36. Highlighting existing problems 9
37. Annual Reports 9
38. Performance - Annual reports 8
39. Interviewer - personal 5
40. Performance - paradigm shift (mentioned) 5
41. Government - joined up 4
42. Independent advice about organisation 3
43. Transparency 3

This list also begins to identify the different influences on the implementation of management reform in the APS and the consequent priorities perceived by those affected by reform.
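Because single comments could attract several open codes, these frequencies count code occurrences rather than interviewees. A minimal sketch of that multi-category counting follows; the comments below are hypothetical and illustrative only, not the research data.

```python
from collections import Counter

# Hypothetical coded comments (illustrative only, not drawn from the research
# data): each comment may carry several open codes, so frequencies count code
# occurrences rather than interviewees.
codings = [
    ("a Minister's influence on a Secretary during reform",
     ["Government - influence on reform",
      "Ministers - policy advice to or directions from"]),
    ("evaluation requirements under the PGPA Act",
     ["Reform - PGPA Act", "Performance - including evaluation"]),
    ("embedding evaluation in departmental practice",
     ["Reform - embedding", "Performance - including evaluation"]),
]
frequency = Counter(code for _, codes in codings for code in codes)
```

Counting occurrences in this way is what allows a single rich response to contribute to several of the ranked categories at once.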

Closer attention was then paid to those initial forty-three categories. This was undertaken through axial coding and the coding paradigm of "conditions, context, strategies and consequences" (Corbin & Strauss, 1990: p.13). While providing additional depth to the research, excessive use of this model can distract from the insights needed to address the research question, and more-detailed coding is still required (Kendall, 1999). This axial coding identified sub-categories from the open codes and began to allow testing of relationships. Figure 11 shows the nine axial codes.

Figure 11. Nine Axial Codes

GENERAL: 1. Accountability; 2. Annual Reports; 3. APS - culture and roles; 4. Highlighting existing problem; 5. Interviewer - personal; 6. Independent advice about organisation; 7. Transparency

GOVERNMENT & MINISTERS: 8. Government - influence on reform; 9. Government - joined up; 10. Government - responsiveness; 11. Ministers - policy advice to or directions from; 12. Ministers - Secretaries, relationships with

MANAGEMENT: 13. Secretaries, existing & later commentaries; 14. SES

PRIME MINISTERS: 15. Influence

PARLIAMENT: 16. Committees and Reports

POLICY: 17. Strategic use in APS; 18. Versus delivery

PERFORMANCE: 19. ANAO: Efficiency & Performance, BP Guides; 20. Annual Report, including IPAA Judges' Reports; 21. Central agencies; 22. Framework; 23. Including evaluation; 24. Joined up government (CW & S/Ts); 25. Long-term importance or absence; 26. Paradigm shift (mentioned or not)

REFORM: 27. Agenda; 28. Basis of; 29. Capability Reviews; 30. Change agents; 31. Series; 32. Developments - external; 33. Developments - internal or progressive; 34. Embedding; 35. Evaluation; 36. FMIP or managing for results, specifically; 37. History; 38. Implementation - in terminology; 39. Implementation - theory; 40. PGPA Act; 41. Program logic; 42. Systemic

RESEARCH: 43. Impact of

Those axial codes represented basic, organising and global themes (Attride-Stirling, 2001).

Those axial codes were not explicitly utilised as analytical headings, but facilitated more detailed exploration of the interview texts. For example, the axial code 'Reform' contained 'change agents', which will be summarised in paragraphs 6.6 Change Agents in Implementing APS Management Reform and 6.7 APS Secretaries as Change Agents: Necessary but not Sufficient. It will then be further analysed in four sections of chapter 7: 7.5 Managing Whole-of-APS Change; 7.6 APS Secretaries as Agency Change Agents; 7.7 Differences between Change Agents and Change Management; and 7.8 Change Management Frameworks, Geography and Extended Time. The resulting themes will be employed in Figure 19, to differentiate between the priorities of APS Secretaries and staff.

This coding established eight themes relevant to implementation systems, critical events, change management, and the key features of successful and unsuccessful reforms. Figure 12 groups these eight themes:

Figure 12 Research Themes Summarised

Theme | Number of Open Codes | Number of Occurrences Recorded
REFORM | 16 | 582
PERFORMANCE | 8 | 402
GENERAL | 7 | 230
GOVERNMENT AND MINISTERS | 6 | 199
(including Prime Minister) | (1) | (20)
MANAGEMENT | 2 | 126
PARLIAMENT | 1 | 33
RESEARCH | 1 | 29
POLICY | 2 | 28
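The theme totals in Figure 12 are the sums of the open-code frequencies in Figure 10, grouped by the axial categories in Figure 11. As an illustrative cross-check only (not part of the thesis method), that arithmetic can be verified programmatically; note that the 'Reform - PGPA Act' count is taken here as 57, an inference, being the only value consistent with both the ranked order of Figure 10 and the REFORM total of 582.

```python
# Illustrative cross-check of Figure 12: theme totals are the sums of the
# open-code frequencies in Figure 10, grouped per the axial categories in
# Figure 11. The 'Reform - PGPA Act' count is taken as 57 (an inference: the
# only value consistent with Figure 10's ranking and REFORM's total of 582).
open_code_counts = {
    "REFORM": [80, 71, 64, 57, 42, 36, 33, 30, 30, 25, 23, 23, 21, 19, 15, 13],
    "PERFORMANCE": [133, 82, 64, 61, 35, 14, 8, 5],
    "GENERAL": [156, 45, 9, 9, 5, 3, 3],
    "GOVERNMENT AND MINISTERS": [71, 49, 32, 23, 20, 4],  # 20 = Prime Ministers
    "MANAGEMENT": [71, 55],
    "PARLIAMENT": [33],
    "RESEARCH": [29],
    "POLICY": [15, 13],
}
theme_totals = {theme: sum(counts) for theme, counts in open_code_counts.items()}
theme_code_counts = {theme: len(counts) for theme, counts in open_code_counts.items()}
```

Each theme total and open-code count so computed matches the corresponding row of Figure 12.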

Those themes provided contexts for answering the research questions. By drawing on the various respondents' academic, APS management, policy, administrative and private sector backgrounds, their answers provided extensive contexts from both research and practice. The ten interview questions (Appendix 2) were derived initially from implementation theory, but had been extended to include evaluation practice and subjective perceptions of success. The initial responses established the following broad themes for examining management reform: reform; performance; general; Government and Ministers; management; Parliament; research; policy.

5.12 The Trustworthiness of this Research

The trustworthiness of this research has been reviewed through the four criteria of "credibility, transferability, dependability and confirmability" (Guba, 1981: p.80). The application of these criteria to this research was set out in figure 7. This research was approved by the UNSW Canberra Ethics Committee, with approval HC 15871 of 26 February 2016. Interview participants were able to choose to be either anonymous or named. All information and data were held securely in confidential UNSW Canberra sources. Transcriptions were made of the interviews and formed a significant part of this research.

This research draws upon both theory and practice, a combination designed to establish the credibility of this thesis. Theory was referenced with two Professors of Public Administration, plus a lecturer in program evaluation. Senior APS management practice was referenced with four former Secretaries, two former Australian Public Service Commissioners and two former Auditors-General. Interviews were conducted with key change agents, both past and present, involved with implementing reform requirements to demonstrate APS performance (meaning program delivery and agency effectiveness).

The conclusions of this research are potentially transferable to current APS reform practice. This single management reform (program evaluation through Portfolio Evaluation Plans) could be regarded as specific in its commencement, its implementation and the effects of its withdrawal. By contrast, there are current performance-related reforms of the APS, in both the implementation of the PGPA Act and the review of whether the APS is fit for purpose. This research has sought to provide lessons about implementation theories and practice that may have future applicability at the national level.

Careful analysis was undertaken to ensure the dependability of this research. It was cross-referenced with the published viewpoints of APS Secretaries past and present, plus expert commentary by agencies such as the Australian National Audit Office and Parliamentary Committees. Overlapping methods included drawing upon interviewees from outside the practice and research worlds. Notes taken during the interviews are also available for audit, together with the raw transcripts of those interviews.

However, the APS varied in its staff numbers during these periods of reform between 1984 and 2013. The interviewees were not sampled in a statistically-defensible manner from any notional population(s) of APS staff. This was not possible for reasons of population definition, staff stability, staff privacy and researcher workload. It may be that views of this reform differ between the descending levels of the APS: Secretaries; Senior Executive Service (three levels); Executive Level (two levels); Administrative Service Officer (six levels). Efforts were made to conduct interviews at each of these levels. The research was limited to the Commonwealth context; further research into State or Territory jurisdictions might be of comparable interest.

5.13 Conclusions

This methodology provides a framework for examining the main research question: whether the management reform of program evaluation became embedded in the APS. There are three further operational questions: (2) what is the role of public sector change agents in embedding APS management reform? (3) how can change management frameworks explain the challenges of implementing APS reform policies? (4) what insights might be learnt from applying a lens of extended time to implementation theory and examining how reforms endure? The method outlined commenced with interviews with key participants in the evaluation reform of the 1980s/90s, to establish how it was implemented and whether any implementation theory was utilised. These interviews were cross-referenced with contemporaneous analyses by researchers and practice-based commentaries. The analysis also included the PGPA Act reform currently requiring 'performance' to be demonstrated, contributing new knowledge by extending implementation and change theories to the practice of 'embedding'.

The following Chapter 6 draws on these results and themes to examine the practice of management reform in the Australian Public Service. It examines the outcomes of program evaluation in the management for results reform. It will also assess the place of change agents and whether reform has resulted in permanent change, together with the initial implications for implementation theory. It introduces the distinction between evaluation as a case study of reform and evaluating the effectiveness of reform implementation.


Chapter 6 The Practice of Management Reform in the APS

6.1 Introduction This chapter presents the analysis of findings relating to the major question of whether the management reform of program evaluation became embedded in the APS. This was the claim made in 1992 (TFMI, 1992) that was overshadowed by the “urgent need to press home the changes and embed them more firmly in the working culture of the Australian Public Service (MAB, 1993:v). That question is answered through interviewees’ responses that were complementary to these three supplementary questions: (1) what is the role of public sector change agents in embedding APS management reform? (2) how can change management frameworks explain the challenges of implementing APS reform policies? (3) what insights might be learnt from applying a lens of extended time to implementation theory and exploring reforms that endure? These questions are important, in both practice and theory. In APS practice, implementation should be a top-level priority, because it is not “a set and forget linear approach to policy making” (Parkinson34, 2016). The results of policy-making are not always discernible. Even after thirty years, a former Australian Public Service Commissioner and Secretary has concluded that “making accountability for results really work” (Podger, 2018) remains an APS work in progress. A review of the implementation of the PGPA Act 2013 recently asserted: “It is not clear why evaluation practice has fallen away, but it can be reinvigorated through attention from the top” (Alexander & Thodey, 2018: p.14). As the top agency responsible for implementing the PGPA Act, the Finance Department has concluded “evaluators will become a critical element of a broader community of Commonwealth performance professionals” (Cook & Morton, 2018: p.161). In its priority of demonstrating the non-financial performance of APS agencies, the management reform of the PGPA Act book-ends the major question above. 
Their common performance factor of utilising program evaluation also operationalises ‘longitudinal’ (Pettigrew, 1990) as extended time, since the two reforms are separated by some twenty years.

Implementation frameworks currently lack a factor of embedding public sector management reform. Initially framing crude differences between reform intent and outcome (Pressman & Wildavsky, 1973), implementation theory developed to a flow diagram of the processes in between (Sabatier & Mazmanian, 1980), to relative maturity (O’Toole, 2000), to the conclusion that a potential third-generation paradigm did not currently exist (Saetren, 2014). The need for embedding the results of implementation did emerge (Lindquist & Wanna, 2015; Howlett, 2018). An additional factor in implementing reform is managing the organisational changes (van der Voet, 2014), though this was absent in an assessment of the future of implementation research (Saetren & Hupe, 2018) and is yet to be assimilated into general implementation theory. Despite the changes of management reform taking decades to show success (e.g. Hood & Dixon, 2015; Pollitt, 2013), the linking of organisational change over extended time is also lacking (Kunisch et al., 2017). The absence of a model of long-term, successful organisational change from reform is a defect in implementation research (Brunetto & Teo, 2018; O’Flynn, 2015). This incomplete model of long-term implementation success underpins a re-assessment of reform as change management (Shannon, 2017), potentially providing clearer analysis of how well reform changes are managed for success or failure. As highlighted in “Why Reforms so Often Disappoint” (Aberbach & Christensen, 2014), ‘endure’ (over elapsed time) can be different from ‘embed’ (a permanent change of practice). The questions above reflect current gaps in implementation frameworks, where links are missing between initiating public sector management reform, managing that reform change over extended time and demonstrating that any success endured and became embedded. These gaps in comprehensive theory start with the design of management reform and its implementation from the top.

34 Dr Martin Parkinson is the current Head of the APS in 2019, as Secretary of the Prime Minister’s Department.

APS management reform starts from the top: the Government and its agent, an APS Secretary. This research challenges the assumption that an APS Secretary can then implement embedded management reform. The case study is the implementation of program evaluation in the 1980s ‘Managing for Results’ (MfR) reform, compared with the implementation of the PGPA Act 2013. Illustrating the reform series (Barrett, 2014; Hood & Lodge, 2007; Pollitt, 2013), each requires demonstration of the non-financial results of APS programs. Program evaluation was required in the former and is being recommended in the latter. Between them is the factor of “embeddedness” examined in this chapter.

This chapter is structured in the following way. The first section provides context to the interview answers, with background about the APS management reform series between 1983 and 2013. The second explores responses to the major question of ‘embeddedness’. The third examines how those management reform changes were managed over extended time. The fourth reviews how APS reform was initiated. The fifth examines the place and influence of identified change agents in managing those APS reforms. The sixth examines the role of APS Secretaries as change agents. The seventh compares their responsiveness to Government with long-term change management. The eighth examines the influence of devolved management responsibilities on uniform reform. The ninth introduces the place of time in implementation. The tenth contrasts these findings from practice with implementation theory. The last two sections begin to develop insights into enhancing implementation theory, from applying a lens of extended time over the implementations of APS management reforms between 1983 and 2013. These insights are examined in detail in Chapter 7. The context is the APS management reform series between 1983 and 2013, a series displaying repeated and similar objectives.

6.2 Context: the APS Management Reform Series

These reforms between 1983 and 2013 had similarities, which are summarised in Figure 13.

Figure 13. Common Factors in Initiating APS Management Reforms: 1983 - 2013

No./Date | Origin of Reform | Similar Objectives
1. 1976 | Coombs Royal Commission | Desirable existing programs meet government objectives.
2. 1983-90 | Financial Management Improvement Program (FMIP) | APS to manage for results (MfR) and program outcomes.
3. 1992 | FMIP Evaluation | Concluded evaluation culture secured in APS management.
4. 1996 | Commission of Audit | APS requires cultural change – more focus on outcomes.
5. 2003 | Cabinet Implementation Unit | Sec PM&C - central requirements to show implementation.
6. 2008 | Operation Sunlight | Finance Minister initiative = improve evaluation of results.
7. 2010 | Ahead of the Game | Government = APS must become outcomes-based.
8. 2013 | PGPA Act | APS must measure performance in achieving purposes.

Source: Adapted from Figure 3 of Chapter 2.

These are repeated reforms of the APS to demonstrate the outcomes which achieve its purposes.

Its purposes are to be “efficient and effective in serving the Government, the Parliament and the Australian public” (Public Service Act). Those management reforms were intended to create policies and procedures to enable this demonstration, by using information on outcomes from evaluation and by effective reform implementation. As explained in the methodology in Chapter 5, the case study is based on the 1980s management reform introducing program evaluation, which was regarded as “a crucial element of the system of managing for results and has a key role in program implementation and policy development” (Keating & Holmes, 1990: p.174). More succinctly, evaluation is where “the policy cycle ends – and restarts” (Althaus, et al., 2007: p.179). Program evaluation is a management tool for information on implementation outcomes and a means of demonstrating policy effectiveness (Bourgeois & Cousins, 2013). When viewed over the long-term of three decades, the initiatives and their common objectives (Figure 13) demonstrate that, despite ongoing efforts, those management reforms did not result in long-term change. The apparent absence of embedded management reform provides the context to the interviews examined in this chapter.

6.3 Embedding Management Reform

This section explores responses to the factor of ‘embedded’. This factor was derived from the conclusion in the ‘managing for results’ evaluation that program evaluation was embedded in APS practice (TFMI, 1992). The need to embed reform has been a consistent refrain in APS practice (e.g. Podger, 2004; Ryan et al., 2008; Sedgwick, 2011b; ‘t Hart, 2011), although it is now acknowledged this has not occurred in Australia (Finance, 2014) or more generally (Hood & Dixon, 2016; Pollitt, 2013). ‘Embedded’ gained a systemic application through performance management systems being regularly used and reviewed for their relevance in maintaining reform momentum (Newcomer & Caudle, 2011). In their recent review of the PGPA Act’s implementation, Alexander & Thodey (2018) noted only that individual Secretaries need to embed the Act’s requirements and did not provide any framework for doing so. The outcome of ‘embedding’ reform remains to be operationalised in APS-wide implementation practice.

In the APS, there is a dichotomy in evaluation practice. It can be either a specialised function separate from operational managers, or a standard practice by senior management (Mackay, 2011). One purpose of the interviews was to assess this dichotomy, as a test of ‘embedded’ in agency-wide practice. Interviewees were specifically questioned on the place of evaluation in these reforms. Responses summarised in Figure 14 were a mixture of the technical (conducting program evaluation) and the general (the outcome of demonstrating agency performance).

Figure 14. Embedding Reform and Evaluation

Code | Times Recorded
APS - culture and roles | 156
Performance - including evaluation | 133
Reform - Embedding | 80
FMIP or managing for results (specifically) | 30
Reform - program logic | 23
Reform - evaluation or lack of | 21
Policy - versus delivery | 13
Annual Reports | 9
Performance - Annual Reports | 8

An APS performance culture links with the practice of evaluation. This was evidenced in the second-highest code being ‘Performance - including evaluation’, although the ‘managing for results’ culture was not widely recalled by respondents. Their answers reflected both the use of formal program evaluation and the separate accountability assessments of program performance through agencies’ Annual Reports (Milazzo, 1992). These Reports on agencies’ annual performance and their accountability to Parliament (ANAO, 2003) are overall self-evaluations of the outcomes from agencies’ performance. While the value of evaluation was recognised in theory, in practice its use fell away, partially through management resistance.

There were several reasons why the practice of evaluation in the APS was resisted. This resistance was not initially identified as an open code from interviewee responses, but became apparent through closer analysis of the interview texts. First, its results challenged the status quo, and people were likely to try to avoid it: “I think the difficulty with embedding a strong evaluation approach is that sometimes people don’t like the answers to evaluations, and that’s kind of almost a perennial problem. So I think that for evaluation to become embedded, you almost need a strong third party independent of agencies that pokes their nose in” (Consultant/former SES officer #2/ANAO official). That single interviewee provided an important analytical context of multiple perspectives on APS management reform, being a current consultant in the private sector, a former senior APS SES officer and a former evaluative ANAO official. That interviewee represented an example of the insider/outsider involvement in APS reform, whose advantages had been summarised in Chapter 5. This observation supported an external proposal raised several months previously, of an independent, statutory Evaluator-General similar to the Auditor-General (Gruen, 2016). That respondent did not differentiate those ‘people’ between management and operational staff.

Second was a view that expectations about outcomes made evaluation hard to adopt: “In the implementation plans that we worked with the departments to develop an implementation plan, was always about benefits, realisation, absolutely. And so many people found that impossible” (former SES officer #7). A third issue was the level of complexity. Some reflected that it had been too difficult to understand and comply with: “You talk to them about evaluation concepts or principles or even practice, in black and white, literatures, “this is an example. It’s not the only example, there are lots of other things, but this is an example of what we can do”. “No, no, we don’t need to do that”. In fact, I was told at one stage that “we’re trying to de-academify… take the academic out of evaluation, so that it can be more useable” (serving EL officer #1). This resistance to the adoption of evaluation reflects the vulnerability of a reform to individual staff acceptance and a lack of embedded use of evaluation by management as standard practice.

In practice, one implementation of performance reform did involve evaluation. Evaluation’s program logic featured in the 2011 National Partnership Agreements (NPAs) between the Commonwealth, State and Territory Governments (Council on Federal Financial Relations, 2011), as a joined-up-government framework (Harwood & Phillimore, 2012). One interviewee regarded NPAs as a reform success: “So some people would say they didn’t like national partnerships for various reasons but given that your topic is around performance measurement and reform, they’re one of the few examples, in my mind, where a lot of thought went in at the beginning to define what success looked like, and to actually measure it. And I think it’s because the money was being given to somebody else” (consultant/former SES officer #2/ANAO official). It is notable that this success stemmed from financial agreements between governments, rather than within-agency program performance through evaluation as asserted in 1992. The awareness of this reform success was limited to a single respondent.

There were few comments on the demise of evaluation as part of the ‘managing for results’ reforms. As a formal APS requirement, evaluation was abolished in 1997 (Mackay, 2011). A former APS Secretary saw the implications, especially for the resulting corporate amnesia (Wettenhall, 2011): “So it was seen as red tape, to use the modern terminology, and so it got dropped but I think that contributed to people forgetting about the importance of evaluations” (former APS Secretary #1). There was also a gap between accepting evaluation in principle and then not using its results in on-going agency management: “there was a bit of rote form filling and ignoring of the evaluations at the time” (consultant/former APS SES officer #6). A further gap was between the policy centre and individual implementing APS departments: “the senior people in Departments and their Ministers don’t see evaluation the way the Department of Finance sees it” (former APS Secretary #4). Given the resistance to practising the reform of evaluation, this study also examined whether any other frameworks could be identified for implementing management reform and managing any associated long-term APS change.

6.4 Managing Change in the APS

This section examines the management of APS reforms between 1983 and 2013. The following analysis looked for the utilisation of any possible change management frameworks where change was designed to be made permanent, or institutionalised (e.g. Armenakis et al., 2000; Fernandez & Rainey, 2006; Kotter, 1995). Figure 15 summarises the limited references to change management discussed by respondents.

Figure 15. Change Management

Code | Times Recorded
Performance - framework | 82
Reform - Systemic | 33
Reform - Agenda | 30
Research - impact of | 29
Policy - strategic use in APS | 15

While there were many references to the generic ‘performance framework’, none was linked with the factor of change management. Instead, major emphasis by interviewees was placed on the commencement of management reform by agency Secretaries.

This was due to two factors: APS responsiveness to government and top-down management. As former APS Secretary #3 highlighted, the first factor was Secretaries’ responsiveness to the Government: “an alignment of interests between the government of the day irrespective of the party of the day, and the senior leadership to public service driving an agenda reform within the public service that was seen to be in the interests of both”. The second was a professional (and top-down) responsibility for management: “you accept that part of your role as the steward of that institution is to embed those reforms that are to the long-term benefit of the institution”. This embedding required delegation: “to change the culture of 150,000 people, you’re really dependent on the next echelon, your colleagues” (former APS Secretary #4). An external participant agreed change started at the top: “So if you don’t get a big push from the head of PM&C and the Public Service Commission, then basically what will happen is that this requirement will be honoured, in an essentially euphemistic and self-serving way” (consultant, non-APS #1). These key points identified the gap between the role of Secretaries in initiating reform changes and any agreed framework for implementing and managing those changes.

This gap contrasted with an earlier policy framework for changing APS policy and practice. The Management Advisory Board (MAB) issued nineteen Guidelines on APS policies between 1991 and 1997 (NLA, 2017). Those were intended to influence senior management and all APS staff: “the Board wants managers to share the experiences of others so that they develop their own knowledge and expertise. Public servants should obtain insights to help with managing change in their organisation” (MAB, 1993: p.iii). A former SES officer confirmed the distribution of these MAB Guidelines and their intended, uniform, policy influence on APS managers. Their actual influence, however, varied between APS agencies.

This variation was outlined by the former manager of their distribution by Finance. He noted: “The MAB/MIAC publications were distributed to every SES officer, using the Public Service Commission address list for SES...The penetration below SES was, like most things in a diverse and devolved APS, mixed. In some agencies, the publications were used to drive change and distributed widely including to staff lower down. Some good examples included DPIE, Education and Employment and the Australian Taxation Office. In others, they were put on shelves and ignored” (consultant/former APS SES officer #6). Although the SES recipients are a small proportion of all staff, being 1.2% in 1984 (1,651: APSC, 2009) and currently 1.7% (2,596: APSC, 2018d), they are the managers supporting Secretaries, as “managing for results gave the SES clear responsibility for their agency’s performance” (APSC, 2009). This links those SES managers with managing the changes in agency performance undertaken by APS staff and ensuring the uniform practices of those MAB policy Guidelines.

The importance of this management responsibility was illustrated after the 1980s MfR reform. The MAB acknowledged this reform required change in APS culture: “there is, however, an urgent need to press home the changes and embed them more firmly in the working culture of the Public Service” (MAB, 1993: p.v). The significance of individual Secretaries in that embedding was shown in this caveat: “much, however, will depend on the heads of Public Service agencies and their managers” (MAB, 1993: p.v). A former Secretary (#3) of that time agreed those MAB publications had limited impacts on APS practice, because of the discretion of Secretaries: “Often the final approval process for a MAB document was about making sure no Secretary was actually required to do anything too specific, since each was an autonomous monarch in his or her own realm”. This demonstrated the tensions between implementing uniform APS policy, or reform, and its actual take-up in practice by an individual Secretary. Uniformity was made more difficult by the devolution of central responsibilities to individual Secretaries in the 1999 Public Service Act (Kemp, 1999), and alternative means of embedding reform changes uniformly in the APS were explored in the interviews. This was done by introducing the next factor: extended time for managing and achieving embedded reform.

6.5 Managing APS Reform over Extended Time for Embedded Change

This analysis introduces change over time and its relationship to achieving ‘embedding’. The objective is to connect ‘embedding’ with the extended time observable in evaluating the effectiveness of public sector reform, such as thirty years (Peters & Pierre, 2017) or forty years (Pollitt, 2013). The means of implementing this embedded change were explored through the question: “what practices of embedding a reform in the APS have you read/written/practised?” Responses from current and former APS staff were wide-ranging, with timing and structural limitations being identified in practice and theory.

The extended time to achieve change was noted. A public administration professor (#1) concluded: “A change, a big change in culture, a big change in a policy program probably needs five to ten years of consistent pushing at something, not announcing something and moving on”. The challenges to making systemic change were identified by former APS Secretary #1: “To embed the values you need three different key elements. You need leadership…They also train people. … The management includes everything you do in management is consistent and supports the values, so what structures you have, your decision-making structures, your training programs, all those things, have got to be consistent with what the values are. So if you’re talking about the way you do things with Ministers what are the structures that give you assurance that your policy advising reflects what you said the values are. And the third one was assurance. How can I be sure that actually the values are being implemented around here?”. This identifies the uncertain influence of a Secretary to change agency practice.

The Secretary’s influence as a change agent could be limited by layers of staff. A former SES officer (#3) who is also a consultant to the APS noted: “In the end, to implement change, you’ve always got to make sure people at the bottom and right through the organisation understand exactly what it is and why it’s worthwhile doing and if they don’t, it can be easily derailed by other things and being too busy with other things and people just finding it all too hard”. Even with the major reform of the central CIU (Wanna, 2006), this change was not realised: “you’ve got establishing the Cabinet Implementation Unit in 2003. Really wasn’t on my radar and I was quite surprised to find maybe four to five years ago, they had an interest in evaluation, ‘cause I didn’t realise that, so that had missed me. I was busy I guess, but whatever technique that was being used to exert some kind of influence from that interest in implementation and evaluation as a part of it. It missed me entirely” (Current EL officer #2). This problematic influence of the Secretary extended to regional staff.

The dispersion of APS regional staff is a factor currently implicit in implementing reform. This challenges the embedding of uniform change in both Canberra-based and regional staff. A gap in the Secretary’s impact was evident here: “at the Department of Education where it’s regionalised and you’ve got somebody who’s a Regional Manager, but he’s not on the message. He’s not going to be implementing the culture change that the Secretary wants and I’ve interviewed that Secretary and he says “what can I do?” (consultant and former EL officer #4). A former Canberra-based SES officer (#4) connected reform and extended time: “the change is really gradual, and you almost need to repeat things three or four times before it actually sticks”. Such repetition of reform change may be needed to connect with regional staff, who are most of the APS (62%: APSC, 2018a). This differentiates the commencement of top-down reform as a one-off implementation, from the subsequent management of that reform long enough for it to stick, or become embedded as enduring practice by all APS staff.

Two factors can be identified in managing embedded APS change. Reform may start at the top: “I’m sitting there producing good stuff that you hope will get to the leadership, but we never really… maybe somebody did, but I wasn’t being engaged in how do we get it down through the organisation” (Former APS Secretary #4). However, it needs to be managed throughout an APS organisation to influence all staff: “It’s part of this embedding issue. If you do not have your stakeholders onside, your staff onside, forget it” (former APS Secretary #1). There was only one reference to embedding reform by legislation: “So, yeah, there’s no doubt that if you can get something into the A.P.S. Act, or if you can get something into kind of a hard wired process of government, you’ve got a very good chance of making it last” (Current APS SES officer #3). This was surprising, since one question had asked about the PGPA Act and its nationally-uniform statements of APS performance. In relation to sustaining change, there was a significant clue from a terminological shift in the meaning of ‘implemented’.

A difference was offered between ‘implementation’ and its later change to a ‘program’. Former APS Secretary #1 differentiated: “what was clear to me in (named Department) was to use a project management approach to this. You had to get your mindset away from program management, that is, an ongoing activity, to think of it in terms of something that has a beginning, a middle and an end, and the end point was the implementation is now settled and we can treat it as a program”. This separated beginning a reform and its staged implementation from later delivering its policy objectives as a program. It identifies a practice-based distinction in time: a wider difference between initiating a reform as a one-off project and its later re-designation as a long-term program. That suggests senior managers perceive differences in both meaning and timing. A reform may be started and its implementation completed, yet that completion differs from the achievement of the reform objectives as a program of long-term change. This is a difference in timing, between commencement and achievement.

Implementing and achieving change has been slowed by the devolved and dispersed APS. This devolution of management responsibilities was intended (Halligan, 2013; Mulgan, 2010; Stewart & Kimber, 1996), but the negative consequences for reforming the single APS have only recently been examined (Halligan, 2018; Podger, 2018). Respondents did recognise the need to embed reforms, first as an ideal: “A change, a big change in culture, a big change in a policy program probably needs five to ten years of consistent pushing at something, not announcing something and moving on” (Professor of public administration #1). Second, from APS practice: “In more recent years, the challenge has been greater in the so-called "joined-up" environments, involving participation by the private for profit and not for profit organisations, other levels of government and across-agency cooperation. These aspects have made the reform path that much more difficult” (Former SES officer #1/senior ANAO official). The difficulties in achieving uniform reform in this devolved APS were frankly acknowledged: “And the thing which is deeply, deeply, deeply embedded in the psyche of the public service is that we are now a whole series of fiefdoms that agencies, Agency Heads, have a deep cultural sense of responsibility bordering on entitlement to make decisions within their own environment” (Former APS Secretary #3). Timing difficulties were also perceived: “We don't know how long it'll take to embed the reforms [the PGPA Act] in the system” (current SES officer #1). These comments identify implementation tensions between the time required for effective change and the devolved autonomy of individual Secretaries.

The management priorities of those Secretaries may not coincide with central guidance. Two factors were identified in achieving uniform management change in the APS: the influence of the APS centre on individual agencies and the challenges of changing a large and dispersed APS of 150,000 staff. A participant in past APS reforms concluded that “one significant feature of the Australian performance regimes is that the centre of government has invested relatively little in building effective performance structures across entities” (Hawke35, 2016: p.50). The APS centre consists of the three core Departments of PM&C, the Public Service Commissioner and Finance (Halligan, 2007), and the first challenge is the APS-wide effectiveness of that centre’s policy and implementation roles. The PGPA Act 2013 was considered to be a structure for embedding central reform: “So when you think about embedding reform, it may be the case of evaluation in the Australian government, of which the culmination of the PGPA Act is a real stellar opportunity to see change occur” (academic in evaluation). The Department of Finance has since issued nine Resource Management Guides on APS performance management under the PGPA Act (Finance, 2017c). Similar to the MAB Guidelines, those Guides may be one means of re-instituting a central influence on the uniform implementation of management reform.

The second challenge to uniform reform is changing APS practice in a large, dispersed entity. Participants are both Secretaries and those layers of their staff down to street-level and in its regions. Those layers were exemplified by former APS Secretary #4: “I think what I thought was trying to engage the senior echelons with it and then hope they’d take it down. I was probably too hands-off in management, more sort of giving them a message of what I expected of them and also thinking about tools that enabled them to do that and we didn’t do that as efficiently, but I still think at the end of the day, the reason why it hasn’t been embedded is because people didn’t want to do it after I left”. This links the importance of trust between a Secretary and other agency managers, with the lack of means of establishing change in practice and assessing its success.

No interviewee identified a framework for assessing the success of management reform. This represents a gap between practice and implementation research, as there are at least two frameworks for establishing policy or reform success. Marsh & McConnell (2010) propose that success in implementation is achieving the intended outcomes in short, medium and long-term timeframes. This can be linked with the five-stage policy cycle of “agenda setting, policy formulation, decision-making, policy implementation and policy evaluation” developed by Howlett & Cashore (2014: p.23). In the short-term of ten years, the Managing for Results reform followed these stages: agenda setting by the Coombs Royal Commission (Coombs, 1977), government policy formulation (Dawkins, 1983), implementation (Keating, 1990) and policy evaluation (TFMI, 1992). Expanding reform implementation into five stages establishes links between the reform objectives and evaluating their achievements. It is surprising that neither of those frameworks identifies the actors and organisational levels required to make those stages happen. The place of change agents in implementing management reform is considered next.

35 Lewis Hawke was an SES manager in the Finance Department between 2003 and 2007 (Hawke, 2007).

6.6 Change Agents in Implementing APS Management Reform

This section examines the place and influence of change agents managing those APS reforms. Several were identified during the interviews, establishing a link between those initiating reform at the top and those who carry out the reform changes. Change starts at the top, through both the Government and its implementing agents: APS Secretaries. Secretaries have dual policy and management responsibilities: for implementing government decisions (Podger, 2009) and for “embedding values in their organisations in practical ways” (Podger, 2009: p.160). As agency heads, Secretaries are significant agents for initiating change (Grube, 2011; Podger, 2009; Stewart & Kringas, 2003), although they are expected to act jointly. This is undertaken through their top-down policy and management steering groups, successively named the Management Advisory Board (MAB, 1993), the Management Advisory Committee (e.g. MAC, 2010) and now the Secretaries Board (AGRAGA, 2010). The Board is responsible “for the stewardship of the APS and for developing and implementing strategies to improve the APS”36. The Secretaries Board “sets the overall direction for the APS, drives collaboration, prioritises collective resource use to achieve cross-boundary solutions and gives priority to the creation and maintenance of a One-APS shared culture” (APSC, 2014a: p.68). First identified in 2005 (MAC, 2005), the priority of a single, one-APS shared culture also highlights the importance of implementing management reform uniformly across the APS.

The interviews drew on this culture through the experiences of six former senior APS officials, including their responsibilities for APS management reform. These experiences derived from their service in the SES (the senior management group below Secretaries) and their later management positions as senior Secretaries (such as of Finance and the Prime Minister’s Department). These experiences were noticeable in the fifth-highest codes being both ‘reform – change agents’ and ‘Secretaries’, coupled with ‘Government’, as summarised in Figure 16.

36 Public Service Act, Section 64(3)(a). At https://www.legislation.gov.au/Details/C2019C00057

Figure 16 Change Agents: Times Recorded

Government - influence on reform: 71
Reform - change agents: 71
Management – Secretaries, including existing and later commentaries: 71
Performance - central agencies' influence: 61
Management - SES: 55
Ministers - policy advice to or directions from: 49
Reform - basis of: 42
ANAO Audits = Efficiency; Performance; Best Practice Guides: 35
Parliament: 33
Ministers - Secretary relationships: 32
Reform – series: 25
Government - responsiveness: 23
Prime Ministers - influence: 20
Reform – developments, internal or progressive: 19

This identified the presumptive top-down influence of both government and Secretaries, with that of individual government Ministers having some further but lesser impact. Together, they are the source of top-down reform change by central agencies, through their initial implementation agents of both individual Secretaries and the Secretaries Board.

These joint roles were theoretically clear. Evident was the Board’s collective role: “Well firstly the Secretaries Board was established to guide reform, take collective responsibility for reform. The Public Service Commissioner, secondly, was tasked to do specific things under all those recommendations to progress them” (former Secretary #2). Those former Secretaries were clear about the Board’s notional role. Interviewees identified that Board members required management skills and long-term vision to implement APS reform, but no evidence was provided of the Board’s effectiveness in that role. As recently identified by the APS Review (APS Review, 2019; Turnbull, 2018), there is limited awareness of the Board. There remains a gap between the Board’s management role and any performance results in implementing reform. A former Board member outlined the separate role of a Secretary: “you accept that you are a leader of an institution and not just a manager of tasks, then you accept that part of your role [as a Secretary] as the steward of that institution is to embed those reforms that are to the long-term benefit of the institution” (former Secretary #3). This identifies three factors in reform: management stewardship, embedding change and outcomes over the long term. It highlights the significant role of an individual Secretary in implementing collective policies.

Government policies have been the source of APS management reforms. The ‘managing for results’ (MfR) reform studied here was Government policy (Wilenski, 1986), devolving management from the centre (the Finance Department) to Secretaries (Podger, 2018). MfR devolved controls over program outcomes, finance and staff to Secretaries, who then became responsible for achieving agency outcomes. This contrasted with the traditional Secretary’s priority of providing policy advice (Ives, 1995b), which was separate from its later implementation (Halligan, 1995). Because “implementation has often been the neglected end of the policy spectrum” (Halligan, 2007: p.228), this resulted in differing management priorities between providing policy advice and its later practice. Incomplete policy implementation highlights the need for ongoing systems and managers of change and its outcomes, to implement reform over time. A finding from this study is the distinction in implementation roles: between Secretaries as policy advisors and change initiators, and the later change managers needed to embed management reform. Secretaries are necessary but not sufficient for embedding management reform.

6.7 APS Secretaries as Change Agents: Necessary but not Sufficient

Starting management reform at the top is not sufficient to ensure that change endures over time. Former Secretaries revealed that their policy and reform priorities were influenced by changes in Government, affecting the maintenance of long-term reform momentum (Barker et al., 2018). This contrasts with the extended time (Pettigrew, 1990) required to anchor change in an agency’s culture (Kotter, 1995). The impacts of changes in Governments were identified by a long-time researcher of the APS (professor of public administration #2): “this was a very distinct example of change with a change of government or a change of Prime Minister I should say, in that particular case so, you could say that knocked the stuffing out of the implementation in that particular case, but what we have learned over time is that much of the new capacity which or different types of capacity which was acquired in the 1980’s got lost along the way and one of the interesting questions is ‘why in hell did that get lost and how did you retrieve it?’”. There is a disjunction in implementing management reform: between initiating it and the momentum required for sustaining it.

This disjunction was repeatedly observed. Reform momentum is interrupted by a short-term attention span which may be in both Governments and their individual Ministers: “Because where is the discipline, it’s not going to happen from inside. And if the Ministers are not concerned and they’re coming and going and of course as we know that’s another issue with governments, it’s not just the issue of governments having short periods of time in government, it’s Ministers now that are constantly turning over” (former SES officer #1 and senior ANAO official). The necessity of sustained central attention to reform was evident: “So one fundamental point about what sustains a reform or what sustains something like evaluation is someone’s got to be interested, someone’s got to pay attention. And that has to be either Ministers or parliament or conceivably a central agency” (consultant/former SES officer #6). The distraction of short-termism (Podger, 2018) challenges current APS management: “and some of the things that government want to do are often incredibly complex, often done in incredibly short periods of time” (current SES officer #3). These become important timing differences, between the short-term priority of Secretaries’ responsiveness to Government and the long-term implementation priority needed to sustain the momentum of a reform.

Two reform momentum factors were identified. These were (1) change at the top (Government or a Minister) affects the short-term attention to implementing a management reform and (2) the vulnerability of a reform continuing, after such a change in government. In 1996, the Federal Government changed, resulting in the evaluation reform not continuing (Mackay, 2011). An absence of embedded outcomes from that evaluation reform was observed: “I think that reform didn’t really progress because the resourcing required to actually embed it ended up not being there” (current APS EL officer #2). A former SES officer during that MfR period reflected on both the attention spans of APS management and the reform series: “So why the evaluation framework as a formal requirement is dropped37, there were two reasons, one was there was a bit of rote form filling and ignoring of the evaluations at the time. The second was a feeling among a number of Ministers that they had this kind of sudden rush of blood to the head about outsourcing that if every activity of the Commonwealth public service could be put to competitive tendering it didn’t need evaluation anyway because you’d sort it out through competitive tenders” (Consultant/former APS SES Officer #6). This ignored the policy in the Guidelines on Contracting for the Provision of Services in Commonwealth Agencies, that “services are contracted out only when there is a valid financial advantage created for the Commonwealth” (MAB, 1992: p.33). It also reflected a lack of long-term attention to embedding reform, which became apparent in other interviewee concerns for current APS capabilities.

37 In 1997 (Mackay, 2011).

These concerns were about APS management skills. Some were generalised: “I perceived that the APS was rather behind the pace on a broader approach to public policy and a contemporary approach to public sector management” (former APS Secretary #2). A further factor of time was combined with staff movement: “People turn over too quickly. They’re not encouraged to stay put and even if they tend to stay in one segment of the policy space, they move around” (current APS EL officer #2). Skills in using performance information are absent: “one of the difficulties about performance information is that there is actually no training in it as far as I can see. You’ve got a lot of people who aren’t numerate for a start in the Public Service” (Consultant/former APS SES Officer #3). An evaluation consultant noted this led to risk-averse management: “They get their heads cut off if things go wrong and that’s what I mean they’d rather just keep continue to manage by doing lots of activities, than measuring how they’re going” (Consultant, non-APS #2). These comments highlight the implementation factors of both management attention spans and staff skills, although both were observed in the negative.

Another negative influence was (over-)responsiveness to the changing priorities of government. This was identified both by a current senior manager: “an issue within the public service of public servants doing what ministers want them to do without engaging in robust debate about what’s the best outcome” (current APS SES officer #2) and by a former Secretary (#3): “the responsiveness has now become to the point where you have a passive reactive service that is so heavily tasked oriented that it can’t think beyond Christmas, let alone the end of the week”. One comment was explicit: “The reason we stopped using the phrase Ahead of the Game [AGRAGA, 2010] is because the government of the day didn’t use the phrase Ahead of the Game” (former APS Secretary #2). This short-termism was typified by the comments of professor of public administration (#1): “I interviewed one former Secretary, a very senior Secretary who said ‘When I shave in the mirror in the morning I don’t know if I’ll have a job at the end of the day’”. There was precedent for this concern, from 1999.

In 1999, the then Defence Secretary was dismissed without notice. The Federal Court subsequently confirmed that Secretaries have no right of tenure once they have lost the trust of a Minister (Weller, 2001, chapter 9). The short-termism identified by Podger (2018) thus has a basis in past practice, with negative consequences for Secretaries as long-term reform implementers. The short term (of responsiveness to government) can be contrasted with the long term (the decades needed to assess the impact of changes in management reforms). The responsiveness of Secretaries to Ministers with short-term priorities (Mulgan, 2008) contrasts with the extended timing of decades to implement and embed reform. This identifies a difference over time between the role of a senior change initiator and the function of the later change manager needed to embed systemic reform.

6.8 Short-Term Responsiveness versus Long-Term Change Management

Long-term change management in policy implementation is under-developed in APS practice. Despite being updated from 2006 to 2014, the central ANAO Better Practice Guide to the Successful Implementation of Policy Initiatives makes only one reference to change management: “it is important not [to] underestimate the importance of change management” (ANAO, 2014b: p.49). That Guide contains a framework neither on change management nor on using change agents. The practice of top-down change management by Secretaries is examined next, in the context of their dual roles.

Secretaries have dual roles: being responsive to government and leading the APS. The consequences of being overly responsive to government can be contrasted with the long-term stewardship roles of Secretaries acting in their own dominions. The priority of responsiveness to Government was a feature of the original 1976 Royal Commission (RCAGA, 1976): “Another theme that was picked up from Coombs was more responsiveness to the elected government, so the elected government was the one that set out what the results were to be achieved, and we would be more efficient in achieving them, but that responsiveness agenda was an important part of the reforms in those early days” (former APS Secretary #1). This responsiveness was tempered by the priority of better APS performance: “whereas a lot of the reforms that we’re talking about were done with the public service because we had a leadership in the public service that wanted the public service to perform better and that coincided with the government’s aim of wanting the public service to perform better implementation” (consultant/former APS SES officer #6). This revealed APS management balancing between the short and the long term: between perceived government priorities and demonstrating APS agency performance.

This balancing challenged the Secretary’s long-term responsibilities. A former APS Secretary (#3) with fifteen years of responsibilities at that level noted: “responsiveness to the agenda of the government of the day is kind of where we should be but the problem is that ministers are so focused on their agenda that they ignore, and the incentive structure the Secretaries face ignores the rest of the job of a Secretary. So the job of the Secretary is not just to be a manager of the business of the government of the day, it’s also to be the steward of an enduring institution”. There is an implementation tension, as the Secretary’s stewardship depends on extended time, and the occupant of that office influences the different practices of that stewardship. These differences were noted: “a lot of this will depend upon Secretaries, the way Secretaries think and operate. There have been different Heads of PM&C over the last 20 years, I’m sure all of them think differently around certain things, and of course they’re going to manage their department and manage the Public Service differently in the light of that” (current APS SES officer #3). The place of extended time was evident in the comments about changes in both heads of the Prime Minister’s Department (PM&C) and Secretaries generally.

Past Prime Ministers, Ministers and influential Secretaries had personal impacts on reform. One PM&C Secretary named in the interviews illustrates the influence of this position as Head of the APS (Mulgan, 1998). As the then Secretary of PM&C between 2003 and 2008 (Figure 19, Chapter 7), he was highly commended for his 2003 initiative in creating the Cabinet Implementation Unit (CIU) (Wanna, 2006) and for its impact on the implementation of Cabinet decisions throughout the APS. This exemplified structurally embedding the requirement of the centre (PM&C) to monitor the implementation of Cabinet decisions. Interview data revealed this influence was not limited to the Secretary of PM&C as APS head.

One Secretary who initially was not the APS head was consistently nominated. In his role as Secretary of the Finance Department between 1986 and 1991 (Peters et al., 2000), Dr Michael Keating was repeatedly referenced in the interviews for his positive impacts on implementing and promoting the 1980s reforms. In this case study of program evaluation, that influence included a central Evaluation Branch and sponsoring best practice across the APS through the Canberra Evaluation Forum (Mackay, 1994). Dr Keating was later promoted to PM&C Secretary (1991-1996: Figure 19, Chapter 7), where he was known to keep his own counsel and actively promote evaluation (Di Francesco, 2000; Podger, 2007). His collective tenure of ten years at the top of two portfolios highlights the place of extended time (in this case, a decade) in the influence of one APS Secretary as a change agent.

A negative contrast was the present short-term tenure of Secretaries. Some named Secretaries were interested in management reform (e.g. Keating at Finance/PM&C; later Moran at PM&C: Figure 19, Chapter 7), but others were not. Those Secretaries’ varying priorities can also be contrasted through four examples from recent APS practice. In the former Immigration Department, the then Secretary introduced an evaluation culture through his Deputy Secretary (Southern, 2014). At the SES 3 level, this was a significant priority in management attention, which one interviewee identified as being abandoned when that Secretary moved to the Health Department in 2014. The Industry Department’s evaluation responsibilities are managed at the senior and influential level of an SES 2 (Industry, 2015). The Infrastructure Department has a current Evaluation Strategy 2016-2021 (Infrastructure, 2016), but without such responsibility identified on its senior management chart (Infrastructure, 2019). Similarly, the Department of Social Services’ Chief Evaluation Officer is at the lower grade of Executive Level 2 and does not appear on that senior management chart (APSJobs, 2017; DSS, 2019). Between an SES 3 Deputy Secretary and an EL 2 Director (Figure 2, Chapter 2), there are four significant differences in management rank and consequent agency responsibility for evaluation.

These differences were exacerbated when Secretaries changed jobs or retired. Interviewees repeatedly observed that such departures introduced a disruptive element into the long-term continuity of embedding reform and into any evaluation practice after those moves. Those interviewed identified no means of maintaining the momentum of management reform after any of these senior managers changed. The original presence of Secretaries had been necessary to initiate a reform, but not sufficient to sustain momentum over the extended time required to embed a reform throughout the devolved and dispersed APS.

6.9 Influences of APS Devolution on Uniform Implementation

Initiating management reform does not result in embedded reform change. This is despite the responsibilities of a Secretary to manage an agency efficiently and effectively (PSAb), through three main roles: (1) responsiveness to Government and its policies, with responsibilities for (2) “the strategic direction of the APS as a whole and (3) the strategic direction of their agencies” (APSC, 2014b: p.68). Analysis shows a deeper tension from the devolution of these management responsibilities to individual Secretaries: “the thing which is deeply, deeply, deeply embedded in the psyche of the public service is that we are now a whole series of fiefdoms that agencies. Agency Heads have a deep cultural sense of responsibility bordering on entitlement to make decisions within their own environment” (former APS Secretary #3). Although this devolution was intended by the MfR reforms (Aucoin, 2012; Keating, 1995; Podger, 2009), Podger (2018) has since concluded that devolution went too far and fragmented the whole-of-government administration needed for public sector reform (Christensen & Lægreid, 2007). Devolving management responsibilities from the centre to Secretaries has created individual domains, contrasting with their corporate responsibilities (above) to the Secretaries Board. As noted above, a Secretary’s tenure may not correspond with the long-term implementation required to embed a management reform.

Embedding management reform changes can require decades (Bovaird & Russell, 2007; Kotter, 1995). Throughout the various interviews, there were many generic references to the position of “the Secretary” or “leaders”, with some reflection that the absences of those same senior personnel led to potential disruption of long-term reform outcomes. A former APS Secretary (#4) observed “If you’re in Health and you’re keen on evaluation and Finance is doing it and along comes somebody who couldn’t give a [expletive] about it, has never had a policy idea in their life, you get ignored”. This was echoed by one of the peers of the above former Secretary #4: “effective leaders are going to determine and effect the direction of the institution and if their take on cultural and renewal is the same as yours then you continue down the same path. If they’re off on a different one, well they’ll kill whatever” (former APS Secretary #3). This change of leaders meant the continuation of reforms could be disrupted.

This potential for disruption was noted. An experienced former APS Secretary (#3) concluded: “when the people change or someone turns their back and you don’t have to comply anymore, well you don’t. So the secret to longevity in any of this reform process is you’ve got leaders who are leaders”. However, an individual change agent was not sufficient: “I think the challenge of having a change manager is that everyone thinks that that’s OK. When do you manage the change? Actually, everyone has to be, otherwise nothing’s happening” (current APS SES officer #2). The multiple layers of reform implementation and change can be observed in this comment: “But the governance arrangements have to be, and the management arrangements, need to be such that when the secretary necessarily moves on to deal with the matters of today and tomorrow, next week and the week after, then the senior executive structure needs to be able to maintain the focus on the things that are important” (former APS SES #8/senior ANAO official). An important conclusion is that the influence of Secretaries as leaders does not automatically result in long-term reform implementation within an agency. The implementation factor of time, as decided by the centre, is considered next.

6.10 Influences of Time on Implementation

Central influence on agencies’ implementation of policy was re-asserted in 2003. Formed in PM&C, the Cabinet Implementation Unit (CIU) connected the government’s Cabinet decisions with APS agencies’ implementation. Being at the APS centre in the Prime Minister’s Department, the CIU operated at the highest level of implementation influence and examination of associated timelines. A variable meaning of ‘implemented’ was assessed by officers of the CIU. This conclusion derives from the following three comments by a serving SES officer (#3) with CIU experience: “We would also seek a closure report from the relevant agency which described what had been achieved and why ongoing monitoring was no longer required”; “The CIU would also talk to other departments if it was necessary to get their views on whether we should stop monitoring an initiative. We would then make a recommendation to the Prime Minister as part of our reporting process, seeking agreement to monitor new initiatives and agreement to cease monitoring initiatives that were considered to be implemented. These recommendations were usually agreed”; “It was always very important that the PM of the day decided what was or was not to be monitored”. The phrase “cease monitoring initiatives that were considered to be implemented” indicates there were no standard definitions.

This variability in defining ‘implemented’ reflected pragmatic APS practice. It also reflects research conclusions, where the timeframes associated with ‘implemented’ have varied from seven years (Kotter, 1995), to thirty years (Hood & Dixon, 2015), to forty years (Pollitt, 2013). Another former APS EL officer (#1) with CIU experience advised: “It was certainly never the case that a department could unilaterally determine that a project had been (sufficiently) implemented, there had to be some basis for that claim in project documentation in terms of what had been planned and actually delivered”. There were tensions in the meanings of ‘implemented’, as this involved two-way assessments: between the administrative (the implementing department and central agency) and the political (the Prime Minister). An important factor in finalising reform implementation was the application of time.

This difference in timing was important. As both of the following interviewees had direct experience of the CIU, their conclusions about timing are especially relevant. A serving SES officer (#3) noted: “In terms of length of time, it was entirely dependent on the type of initiative. Some moved faster than others. Large initiatives could take several years”. This variability in timing was supported: “In short, each project was reported on in its own terms and timelines; there was no specified or average time frame, either projected at the beginning of a project or in practice. The reporting process went on as long as the Cabinet Submission, implementation plan and subsequent reports indicated that a project was being implemented” (former APS EL officer #1). In APS practice, there were no common definitions of ‘implemented’ and ‘achieved’. This illustrates the CIU’s variable timeframes in practice and identifies the place of extended time in implementation theory. The CIU’s abolition in 2015 (Gold, 2017) suggests real-time implementation practice returned to Secretaries and their devolved responsibilities. This can be contrasted with the difference identified between ‘implemented’ and ‘program’ (in Section 6.5) and makes unified action by the one APS (AGRAGA, 2010) problematic.

The uniform impact of top-down initiatives on the single APS was not apparent. One central initiative was a single APS evaluation culture (Tune, 2010), where implementing one agency’s culture required an SES 3 Deputy Secretary (Southern, 2014). After that Deputy Secretary left, one interviewee concluded that agency’s evaluation culture had been diminished. The politics of devolving management responsibilities from the APS centre and within individual agencies were examined in Section 6.9. The embedding of reform is influenced by the management level of change agents and their continuity in office, resulting in two insights. The first is that APS management reform follows a top-down implementation path. The second identifies two discrete roles in effective change management: change originates from the Secretary, but embedding reform needs a separate change manager involved in long-term implementation. These provide insights into enhancing current implementation theory.

6.11 Implementation Theory Absent in APS Practice

Current implementation theory can be contrasted with these findings emerging from practice. Evaluation theory was ranked low (twenty-third of the forty-three open codes) and implementation theory did not feature significantly in interviewees’ discussion (ranked thirty-first of those forty-three codes). The absence of these two frameworks, or any others, was recognised: “So in essence what happened then of course, the academic community was quite scathing in its criticism of these reforms, because the simple response was they had no theoretical underpinning” (former APS SES officer #1 and senior ANAO official). Indeed, implementing the ‘managing for results’ reform “was not an ideology, it was whatever policy outcomes the government was after and they might be to the left, it might be to the right” (Consultant/Former APS SES Officer #6). As well as not being based on theory, there was no implementation plan.

Former SES officials from that MfR era reflected on this absence. They identified a lack of detailed implementation objectives: “We talked about processes of reporting back to the centre … on each of these projects, which were the new policy measures that were approved, and the risk management around that. … But it wasn’t a sort of a carefully worked out theoretical arrangement” … “It was one of these trajectories where at the beginning you didn’t know what it was going to look like at the end” (former APS Secretary #1 and Finance SES officer during the MfR reforms); “there is no part of any story that I’ve just told you where I knew where we were going to end up” (former APS Secretary #3). Although this reform was designed to introduce a results-based performance culture in the APS (Keating & Holmes, 1990), there was no management plan for the implementation of that MfR reform. Any initial focus on implementation theory was absent.

This absence was clearest in the responses by two researchers in public administration. Professor of public administration (#1) reflected: “I’m in that camp I suppose with [named public administration academics], who are sort of doing a kind of informed commentary on what the public service is trying to do, what might be the strengths of what it’s trying to do, what might be the weaknesses of what it’s trying to do”. The second professor of public administration was more forthright about this gap: “They didn’t have a proper implementation program, of following through and working it through with Departments”. This was noted by former APS Secretary #1: “you addressed the issues as you saw them at the time and they led onto new agendas, and you look back on it and see it as a coherent whole, but the coherent whole wasn’t seen at the beginning”. This was supported more specifically by a former APS EL officer (#1), with experience in the CIU: “implementation frameworks or approaches or theories? No, none seriously applied”. From these data, theory-practice gaps begin to emerge. These gaps are in the links between implementation, change management and extended time.

6.12 Change Management in APS Practice is Short-Term

Varied meanings of ‘implemented’ contrast with the lengthy timing of ‘embedded’. The terminology of embedding was introduced in question eight: “What practices of embedding a reform in the APS have you read/written/practised?” Structurally, this systemic embedding starts with the Secretaries Board (APSC, 2014b) and the agency Secretary, then extends into the next three managerial strata of the SES and the two Executive Levels below them (Figure 2, Chapter 2). Answers brought out the need to maintain continuity of a reform change over time, once it had commenced. Broken links between change initiators, later managers and subordinate staff disrupted that required continuity of change.

Two of these breaks were identified by two former APS Secretaries from the MfR reform period. One was the expectation that implementation would be continued by subordinates: “I was probably too hands-off in management, more sort of giving them a message of what I expected of them and also thinking about tools that enabled them to do that and we didn’t do that as efficiently, but I still think at the end of the day, the reason why it hasn’t been embedded is because people didn’t want to do it after I left” (former Secretary #4). The second was the short-term focus on getting the job done: “The change management stuff is longer term stuff and if you’re heavily task oriented you don’t have time to embed change, you just get it done” (former APS Secretary #3). When the formal evaluation requirement was abolished in the APS in 1997, the Finance Department’s Evaluation Branch, its change agent, was abolished with it (Mackay, 2011). The actions of a separate change manager had not ensured that the reform continued.

Embedding reform change remains a challenge in the current APS. This was noticed by a current SES officer (#2): “and I think the challenge of having a change manager is that everyone thinks that that’s OK. When do you manage the change? Actually everyone has to be, otherwise nothing’s happening”. A change in personnel was disruptive: “when the individual champion left, things fell away again every time” (current EL officer #3). This was especially so at the middle management level of Director (EL 2): “When new Directors come in, then a lot of what happens is they’re not interested in doing what the previous Director had done. They need to develop their own empire and there’s no point … it’s not going to get you a promotion if you just keep doing what the previous guy did and there was a level of arrogance also in my view, in the people that came in. There was very little respect for what we had achieved previously and little respect for even the literature” (current EL officer #1). The continuity of implementation was disrupted by changes at this middle level.

Another disruptor was the distance between the Secretary in Canberra and the dispersed regions. Sixty-two per cent of APS staff are located in those regions (APSC, 2018a), where the influence of a Secretary may not reach: “here’s the Department of Education where it’s regionalised and you’ve got somebody who’s a Regional Manager, but he’s not on the message. He’s not going to be implementing the culture change that the Secretary wants and I’ve interviewed that Secretary and he says ‘what can I do?’” (Consultant/former EL officer #4). This is a broken link in implementing longer-term change management: a succession of actors may be necessary to achieve permanent changes in APS practice.

Permanent change in the APS may not be achieved by the initiators of management reform. The data revealed that such initiating individuals can change their roles, or the Government may change, leaving the continuation of those reforms uncertain. This is particularly so where the changes have not yet been taken up uniformly by all staff, supporting conclusions about the incomplete implementation of the MfR reforms across the APS (TFMI, 1992). This complements previous interviewee comments about (over) responsiveness to government influencing short-term senior APS priorities and disrupting the longer-term actions required to embed a management reform. This perspective of extended time can be applied to existing implementation theory.

6.13 Applying a Lens of Extended Time to Implementation

Implementation frameworks may be enhanced by a factor of extended time. Between twenty years (Bovaird & Russell, 2007) and thirty years (Craft & Halligan, 2017) can be required for a reform to become embedded, and the interviews were designed to uncover any such longer-term perspectives or implementation frameworks employed in APS practice. It was clear that timing and its connections with performance-related reform formed some background in respondents’ answers, as summarised in Figure 17.

Figure 17 Extended Time in Theory

Theme | Times Recorded
Reform - implementation (in terminology) | 64
Performance - long term (importance or absence) | 64
Reform - PGPA Act | 64
Accountability | 45
Reform - developments: external | 36
Reform - history | 23
Reform - implementation theory | 15
Performance - joined up government: CW and S and Ts | 14
Reform - Capability Reviews | 13
Performance - paradigm shift (mentioned) | 5
Government - joined up | 4

One example of a recent management reform that became disrupted was Ahead of the Game (AGRAGA, 2010). Although initiated by one Labor Prime Minister in 2009, it was largely de-funded by his successor Labor Prime Minister in 2010 and not funded at all after the later change to a Liberal-National Party Coalition Government (Horne, 2010). This was summarised simply by a former senior Secretary (#2): “The reason we stopped using the phrase Ahead of the Game is because the government of the day didn’t use the phrase Ahead of the Game”. The factor of maintaining the momentum of management reform over time was not observable.


Time and maintaining reform momentum are implicit factors in initial implementation. Changes in government can disrupt both momentum and the changes required to all the sub-systems of a public sector system (Fernandez & Rainey, 2006). Varying time periods can be required for implementation, and this study revealed that the inconsistent assessment of ‘implemented’ is affected by the meaning attached to ‘time’. Past use has not been specific about extended time in implementation. Time can be defined in three ways: (1) the initial implementation period, e.g. ten years (Keating, 1990; TFMI, 1992); (2) a longer period of twenty years, after which a reform is regarded as implemented (Bovaird & Russell, 2007); (3) the eventual embedding of reform in agency practice, which is long-term but not specified (Kotter, 1995; Marsh & McConnell, 2010). Differentiating these varying periods of time connects implementation over extended time with effective reform, by adapting Pettigrew’s (1990) use of ‘longitudinal’ and avoiding research based on single, one-off events. This research has adapted Pettigrew’s meaning of longitudinal as the extended time needed to demonstrate the longer-term implementing and embedding of the ‘managing for results’ reform.

Conclusions about ‘implemented’ or ‘embedded’ are affected by including extended time. This lens of extended time is exemplified in the PGPA Act 2013, which specifies the place of time in the annual statements of an agency’s non-financial performance. Section 38(1) of that Act mandates: “The accountable authority of a Commonwealth entity must measure and assess the performance of the entity in achieving its purposes”38. This assessment of planned corporate performance covers four years: “The corporate plan for a Commonwealth entity must cover a period of at least four reporting periods for the entity, starting on the first day of the reporting period for which the plan is prepared under paragraph 35(1)(a) of the Act” (Finance, 2017b; PGPA Rule – S. 16E39). Being longer than the existing one-year requirement of an agency’s Annual Report, this period of four years introduces the factor of time in demonstrating agency performance. The initial planning of an agency’s performance is linked with later evaluating its effectiveness and introduces a specific timing factor that was absent in the previous ‘managing for results’ reform of the 1980s. What works could guide future reform practice.

This case study of ‘did it work’ has also examined the reform series and its element of time. Connections made between this series of repeated reform requirements demonstrated the effectiveness of APS performance and the implications of learning from past reforms. A former SES officer (#5) agreed: “we’ve had devolution, centralisation, you know this that and the other sort of thing that will change from time to time. But there’s a huge amount of common ground I think in it over the long term. The changes are often just going around in circles and in that sort of context it’s possibly useful to sort of draw attention to past experience”. A current SES officer (#1) with responsibilities for implementing the PGPA Act observed this repetition: “And sometimes notions or ideas sort of cycle through the system, disappear, come back again, expressed in slightly different ways, or in slightly different forms. … So I think the system, let's say it's offered reform ideas on a platter, it chews over them, it ingests some, and it spits out others. But some of the stuff that's spat out comes back later, in some other form”. This was also noted by an outside specialist in evaluation theory: “So we’re not using learnings from previous experiences to help us. In many cases, the reform is coming as a completely different direction for a different purpose, and they are at cross purposes with the last one” (Current academic in evaluation). Only one interviewee identified the similarities between previous reforms and the current PGPA Act: “So PGPA is not a vast difference. As with everything, same applied with FMA [the former Financial Management Act, 1997-2013], the difficulty is not the legislative framework but in the implementation” (Former SES officer #6 in Finance Department during MfR reforms). This mixture of research and practice-based commentary has identified similarities between the 1980s reform of ‘managing for results’ and the PGPA Act 2013. These similarities follow.

38 http://www.finance.gov.au/resource-management/pgpa-act/38/
39 http://www.finance.gov.au/resource-management/pgpa-rule/16E/

In the former, performance was demonstrated by evaluation; in the latter, by law. Under MfR, effective program performance was demonstrated through formal evaluation (Mackay, 1992); under the PGPA Act, through its Section 39(2) requirements for annual performance statements that “provide information about the entity’s performance in achieving its purposes” (PGPA, 2018). Although lacking this historical perspective, these connections were recognised in the 2018 review of the implementation of the PGPA Act: “It is not clear why evaluation practice has fallen away, but it can be reinvigorated through attention from the top, including from the Secretaries Board, accountable authorities and ministers” (Alexander & Thodey, 2018: p.14). This has since been accepted by the central agency responsible for this implementation: the Department of Finance (Morton & Cook, 2018). These similarities contribute to potentially enhancing implementation theory through a perspective of extended time, in relation to meanings of ‘implemented’, ‘effectiveness’ and ‘embedded’.


6.14 Conclusions

This chapter has been framed by two research aims. The first reviewed the contribution of institutionalisation (‘embedding’) from such change management frameworks as Armenakis et al. (2000), Fernandez & Rainey (2006) and Kotter (1995) to the present frameworks of management reform implementation. The second examined whether the changes started by management reform could endure long enough to become embedded, by adapting the longitudinal perspective of Pettigrew (1990) as extended time. The study revealed that no model of implementation had been applied thirty-five years ago in the APS practice of ‘managing for results’. The stages of that APS management reform were initially implemented through senior change agents, but thereafter became vulnerable to the presence or absence of those same change agents as the later managers of change. Changes of government meant changes in APS priorities for reform (Horne, 2010; Mackay, 2011) and disrupted earlier reform implementation. Significant individuals such as Ministers Dawkins (1985) and Tanner (2008) and Secretary Keating (1989, 1990, 1995) promoted the changes stemming from a reform. These changes in government and individuals indicate that different structural means are required for reform to become institutionalised, or embedded.

A balance was provided between practice and theory. Practitioners’ responses about APS reform implementation were compared with those from three academic sources of research into these reforms. Former Secretaries, senior management, former and existing APS staff and private sector consultants provided a reflective basis for comparing reform experiences during the 1980s MfR reform with those currently under the PGPA Act. These comparisons revealed a significant research-practice gap in implementing APS management reform, and the factor of ‘embedded’ remains to be operationalised.

In their roles as the top-down initiators of reforms, former Secretaries demonstrated that theories of either implementation or change management had had no impact on their implementation practices in the APS. That lack of applied research frameworks for reform implementation also introduces elements of subjectivity and randomness into the practices of those individual Secretaries undertaking reform implementation. As outlined in those interviews, several reforms were started without the implementers first agreeing on detailed implementation plans or measurable indicators of implementation performance and success. Implementation practice became a subjective choice initially of the Secretary at the time.


This identifies larger concerns about the consequential impacts of those reform change agents and their systemic implementation frameworks. A distinction in Australian implementation language thus developed from these interviews, with implications for the extended timing element of Pettigrew’s longitudinal framework. The findings lend support to the need for “robust conceptual models to understand public sector reform” (O’Flynn, 2015: p.21). For their cross-disciplinary implications, these interim conclusions are discussed in more detail in the following Chapter 7 – Discussing the Common Clues to Embedding APS Reform.


Chapter 7. Discussing the Common Clues to Embedding APS Reform

7.1 Introduction

This research explored the core issue of whether the 1980s management reform of program evaluation became embedded in the APS. By its title, “The Secretary Said: Make it So”, this thesis challenged assumptions that an APS Chief Executive Officer can implement the changes required of a management reform and ensure they are embedded as permanent practice. Chapter 6 concluded that evaluation had not been embedded in permanent APS practice, for the following reasons. First, no theories of either implementation or change management were utilised in the reform practices of APS Secretaries. Second, long-term implementation was vulnerable to the limited tenure of the Secretaries commencing management reform. Third, there was a gap between ‘implemented’ and ‘embedded’ in defining the outcomes of management reform over time. Fourth, the long-term impacts of those APS management reforms have not been formally evaluated as to their effectiveness in achieving their original objectives. It is now accepted that no past APS management reforms were embedded (Finance, 2014). While there has been extensive research on the implementation of public sector management reforms (e.g. Aucoin, 1990; Barrett, 2004; Christensen & Laegreid, 2007; Hupe, 2011; Matland, 1995; McTaggart & O’Flynn, 2015; O’Toole, 2000), reform practice remains “surprisingly hotly contested” (Hood & Dixon, 2015: p.6). By introducing the factor of extended time (Pettigrew, 1990) into implementation theory, this chapter makes three contributions to new knowledge.

These three contributions enhance implementation theory and public sector management reform. First, the changes of a management reform need to be implemented uniformly throughout the geographically dispersed APS. Second, reform changes should be continued beyond the initiating change agents (momentum maintained over extended time). Third, the longer-term reform impacts should be tested by formal evaluation (changes are embedded). The common links in these conclusions about managing reform changes are discussed in this chapter, by examining how change theory explains what is not happening in APS practice. The chapter also reviews what the implementation of that APS reform should have included in practice.

This chapter identifies the long-term impacts of reform change agents and enhances existing implementation frameworks. There is a research-practice gap in implementation theory, which lacks a framework for both implementing and embedding management reform. This research contrasts current implementation theory with APS practice and contributes new knowledge to address that gap. Through closer analysis of the research findings, this chapter contributes factors found to be missing in management reform: reform momentum, extended time and the evaluation of embedded outcomes. The chapter finishes with insights into current implementation theory and APS practice, providing cross-disciplinary linkages that may enhance both. It begins by answering the first question, about embedding the 1980s reform of program evaluation.

7.2 Embedding APS Management Reform

This study found that program evaluation was not embedded in permanent APS practice. Program evaluation was a critical component of the 1980s MfR reforms, demonstrating program performance and the APS’s effectiveness (Keating & Holmes, 1990; Mackay, 1992). It imposed new responsibilities on senior APS managers to show the results achieved in their programs. MfR also allowed those managers increased autonomy in those functions, because decision-making was devolved from the APS centre (the Finance and Prime Minister’s departments) to departments and internal managers (Guthrie & English, 1997; Stewart & Kimber, 1996). Evaluation provided critical information on agency accountability to both Secretaries and the Parliament (ANAO, 1997). Evaluation was both a reform of management performance and the means of assessing the effects of a reform (TFMI, 1992; Sedgwick, 1994). As the case study for this research, it had clear dates of implementation (1984) and finalisation (1996).

Contrasting claims had been made about the impact of the overall managing for results reform. Initial claims to have reformed APS practice (Keating, 1990; TFMI, 1992) contrasted with the conclusion shortly afterwards that there was “an urgent need to press home the changes and embed them more firmly in the working culture of the Public Service” (MAB, 1993: p.v). The requirement to conduct program evaluations was removed in 1996, following changes of both government and Finance Secretary, the latter focussed on implementing the newest reform: accrual accounting. Although not formally abolished by that central Finance Department, the conduct of program evaluation by line Departments became spasmodic (Mackay, 2011). The comparatively short life of the MfR reform (1984-1996) identifies the missing factors of ‘extended time’ and ‘embedded’ in management reform of the geographically-dispersed APS. These missing factors, contributed by this research, are considered in detail next.

7.3 Implementing Evaluation not Long-Term

Extended time and geography were only indirect factors in initially implementing and attempting to embed the evaluation component of MfR. The factor of ‘embed’ discussed so far raises the question: embed where? It was implicit in conclusions by Pressman & Wildavsky (1973) about implementation going awry between head office in Washington DC and Oakland in California. It is not so much having the agreement of APS Secretaries at the top, as ensuring that the reform changes have their desired impact on practice in the eighteen agencies of the dispersed APS, where sixty-two per cent of staff are located in the States and Northern Territory (APSC, 2018a). Geography is yet to receive detailed research attention as an implementation factor in senior management spans of control in a large country such as Australia.

In Australia, those factors of time and geographic location became clearer in the 1992 evaluation of the MfR. That evaluation report concluded the reforms had neither been continued long enough, nor explained to APS regional staff, who had “often felt the impact of implementing the reforms without the broader perspective to carry them through” (TFMI, 1992: p.487). This can be contrasted with the desired last stage of management reform changes: that they become institutionalised as permanent practice in the agency’s culture (Kotter, 1995). That last stage was adapted as the ‘embed’ of this study, meaning a permanent change of practice by all public sector staff. The MfR evaluation report concluded that implementation had been started, but further work was needed to reinforce that management practice in APS culture (TFMI, 1992). This study has identified the following two gaps.

First were gaps between the APS centre in Canberra and the periphery in APS regional offices. Such distances of thousands of kilometres are indicative of both lengthy spans of management control and the need for unified management information systems reporting on reform outcomes. Second were gaps between management initiating reform, but not maintaining the later momentum of change needed for long-term embedding. One contribution of this study revealed differences over time in meanings of implementation: between initiating reform and its later embedding. This differentiated between any later conclusion of ‘implemented’ and achieving permanent changes of practice. The results of this study contribute to re-framing implementation theory with additional factors of centre and geographic periphery, extended time and evaluation. The top-down, long-term influence of the APS centre on change in the devolved APS is reviewed next. This distinguishes between the influence of the APS Head and that of a departmental CEO: the Secretary.


7.4 Intended Influence of the APS Centre on APS Secretaries

Direction by the APS centre has shifted to attempted influence on these devolved APS managers. Devolution of management responsibilities from the centre (such as the Finance Department) to Secretaries was explicitly intended by the 1980s MfR era (Hamburger, 2007; Keating, 1995). That split in responsibilities for implementation and management challenges the effective, long-term implementation of APS-wide management reform. This devolution of management responsibilities resulted in variation in agencies’ performance and a fragmented APS, without a whole-of-government approach to wicked problems crossing single agency boundaries (Head & Alford, 2015; Podger, 2018; Radin, 2003). Over twenty years, there were repeated attempts by the centre to implement standards of whole-of-APS administration. Examples included the “Better Practice Principles on Program Evaluation” (ANAO/DoF, 1996), issued at the same time as the abolition of Portfolio Evaluation Plans by the new Government in 1996 (Mackay, 2011). Shortly afterwards, program evaluation was re-affirmed as “one of the critical tools available to assess program performance” (ANAO, 1997: p.xi), indicating some re-assertion of the centre’s influence. That 1996 decision illustrates two factors in this research: the top-down nature of APS reform and the ease of removing a reform.

This contrasts with the attempted central embedding of reform by Better Practice Guides (BPGs). Issued by the Auditor-General, these Guides were “important source documents for managers operating in an environment of devolved authority and responsibility” (Barrett, 2004b: p.12). They aimed “to improve public sector administration by assisting entities to perform at their most efficient level, through the adoption of better practices to transform and improve business processes” (ANAO, 2017b). These BPGs demonstrate the tension between policies intended by the centre and the reality of their practice within a particular agency. This fragmentation was evident in Reviews by the Australian Public Service Commission of APS agencies’ capabilities (APSC, 2013a; Tiernan, 2015b). Those Reviews showed gaps between the APS centre and those agencies that also contribute to fragmenting reform implementation.

The gaps were that agencies’ self-perceptions did not match the external Review Teams’ conclusions. Significantly, the APSC concluded: “looking across all the reviews conducted to date, agencies have viewed their capability a little more positively than the assessment given in the findings of the final reviews. Agency self-assessments and final reviews have been most closely aligned on delivery and most disparate on strategy capability. Agencies generally rated themselves much higher on their assessment of outcome-focused strategy than the reviews ultimately did” (APSC, 2013f: p.208). Such tension between central agencies’ intended influence and any actual implementation by line departments may have added to the difficulties of Boston (2000) and O’Flynn (2015) in finding demonstrable results from public-sector management reforms. Central agency roles and their influence on the decentralised APS are under-researched in Australia (Wilkins et al., 2018). This study found that a framework for managing whole-of-APS management reform change was not apparent.

7.5 Managing Whole-of-APS Change

Under the Public Service Act 1999, the APS is represented as a single entity. Consequently, this research examined the sources of, and influences on, implementing management reform so that it becomes uniform across that single APS. A key factor in implementing top-down reform is the influence on APS Secretaries of their superior: the Secretary of the Prime Minister’s Department (Hamburger et al., 2011). This Secretary chairs the collective APS Secretaries Board, whose members are expected to be change agents and leaders of reform (APSC, 2014b). To a limited extent, only the changing role and functions of that central Department have been debated (e.g. Keating, 1995; Tiernan, 2006; Weller, 1983; Yeend, 1979). By contrast, the impact of changes in the person of its Secretary on sustaining whole-of-APS management reform momentum has not been researched. A finding of this study has extended the significance of ‘extended time’ from organisational change to personal occupancy, by examining the increasingly short-term occupancy and influence of that APS head.

The influence of the APS Head is initially the key to implementing whole-of-APS reform that endures to become embedded. In the thirty years between 1935 and 1968 there were four PM&C Secretaries, while there were double that number, eight, in the thirty years between 1986 and 2016 (Figure 18). The increasingly rapid turnover in the occupant of PM&C Secretary has the potential to disrupt the personal impact of that APS head. The centre’s impact on those decentralised APS agencies was already limited by management devolution from the centre to Secretaries (Morrison, 2014; Podger, 2018). Because of the size of the APS and its dispersion (Figure 1, chapter 2), implementing uniform management change across all the departments of the APS so that it becomes embedded is complex.

Such complex reform change requires maintaining reform momentum. That momentum must be maintained beyond the initial agreement by the Secretaries group around the APS head, the later actions by an agency’s SES reporting to their Secretary, and the Executive Level staff managed by their SES 1 Branch Head in Canberra (Figure 2, chapter 2). This study found that such systems (of influencing the APS beyond those Secretaries and their next management level of the SES) were theorised as being needed but were not further identified. This identifies a gap in the long-term management of uniform APS reform change that becomes embedded. Neither central nor uniform management of the APS for the long term could be identified.

The management and achievement of long-term APS change starts at the top: the APS Head. Figure 18 illustrates the factors of turnover and time in office affecting the priorities of the APS Head.

Secretary of Prime Minister’s Department and Head of the APS
Figure 18 Tenures and Medians 1912-2019

Order | Name | Date Started | Date Ceased | Term in Office | Median (period)
1 | Malcolm Shepherd | 1 January 1912 | 27 January 1921 | 9 yrs, 26 days |
2 | Percy Deane | 11 February 1921 | 31 December 1928 | 7 yrs, 324 days |
3 | Sir John McLaren | 1 January 1929 | 2 March 1933 | 4 yrs, 60 days | 4 yrs, 60 days
4 | John Starling | 3 March 1933 | 10 November 1935 | 2 yrs, 252 days |
5 | Frank Strahan | 11 November 1935 | 25 August 1949 | 13 yrs, 287 days |
6 | Sir Allen Brown | 25 August 1949 | 31 December 1958 | 9 yrs, 128 days |
7 | Sir John Bunting | 1 January 1959 | 10 March 1968 | 9 yrs, 69 days |
8 | Sir Lenox Hewitt | 11 March 1968 | 12 March 1971 | 3 yrs, 1 day | 3 yrs, 1 day
TOTAL 1935 to 1968: Four
9 | Sir John Bunting | 17 March 1971 | 31 January 1975 | 3 yrs, 320 days |
10 | John Menadue | 1 February 1975 | 30 September 1976 | 1 yr, 242 days |
11 | Sir Alan Carmody | 1 October 1976 | 12 April 1978 | 1 yr, 193 days |
12 | Sir Geoffrey Yeend | 18 April 1978 | 10 February 1986 | 7 yrs, 298 days |
13 | Mike Codd | 10 February 1986 | 1 December 1991 | 5 yrs, 294 days | 5 yrs, 294 days
14 | Dr Michael Keating | 1 December 1991 | 13 May 1996 | 4 yrs, 164 days |
15 | Max Moore-Wilton | 13 May 1996 | 20 December 2002 | 6 yrs, 221 days |
16 | Dr Peter Shergold | 10 February 2003 | 28 February 2008 | 5 yrs, 20 days |
17 | Terry Moran | 3 March 2008 | 5 September 2011 | 3 yrs, 186 days |
18 | Dr Ian Watt | 5 September 2011 | 30 November 2014 | 3 yrs, 86 days | 3 yrs, 86 days
19 | Michael Thawley | 1 December 2014 | 23 January 2016 | 1 yr, 53 days |
20 | Dr Martin Parkinson | 23 January 2016 | 31 August 2019 | 3 yrs, 212 days | 3 yrs, 212 days
TOTAL 1986 to 2019: Eight

Source: https://en.wikipedia.org/wiki/Secretary_of_the_Department_of_the_Prime_Minister_and_Cabinet_(Australia) (originally accessed 5 May 2017)

Within the time-frame of this research (since the MfR of 1984), turnover in the head of PM&C has increased. The corresponding periods of personal occupancy have been reduced from over five years to three. Changes in the Secretary of the Prime Minister’s Department have occurred with a median of between three and six years, contrasting with the twenty to thirty years required for implementing effective management reform (Bovaird & Russell, 2007; Hood & Dixon, 2015). Those changes highlight the potentially short-term influence of any one occupant on the APS, a consequent disruptor of long-term management reform implementation.

Changes in Government disrupted the MfR reform of evaluation in 1996. Much had been made of the changes in APS culture resulting from the ‘Managing for Results’ reform (Ives, 1995b; Keating, 1990; Sedgwick, 1994; TFMI, 1992). Only shortly afterwards, changes of government and the Secretaries of both Prime Minister’s and Finance led to APS program evaluations being no longer mandated. The government in 1996 considered that performance reports from outsourced private sector delivery agents would provide the same information (Mackay, 2011). Interviewee comments drew out how changes in the APS Head affected the implementation practices of APS Secretaries, showing that changes in this person can lead to short-term perspectives in the implementation priorities of other APS Secretaries. Comments on these various occupants were not always favourable. The short-term tenures in the occupants of PM&C Secretary as APS Head have influenced the continuity of reform implementation. As APS Head, that Secretary influences other APS Secretaries.

APS Secretaries are necessary but not sufficient agents of permanent APS change. They are key change agents for initiating management reform, but the pace of momentum can depend on individual Secretaries and their continued occupancy of that senior position. Interviewees specifically noted that the sponsor of the “Managing for Results” reform (Dr Keating) was Finance Secretary for five years, then promoted to Secretary of PM&C and APS Head for five more years. This provided a personal momentum of ten years for the MfR reform change. The later change in reform practice (its abolition) resulted from the 1996 change of Government and its evaluation requirements, coupled with changes in the Head of the APS (the PM&C Secretary) and the Secretary of the Finance Department (the earlier initiator of the MfR reform). These are the two key figures in the continuation of APS reform.

Their association revealed the impact of individual APS Secretaries on implementing reform. This finding was confirmed by analysing the implementation of a later management reform, “Ahead of the Game” (AGRAGA, 2010). “Ahead of the Game” was initiated by a change of government in 2007 and implemented by the PM&C Secretary, who occupied that position of APS Head for three and a half years. A senior former Secretary (#3) explained that this reform was not a priority for the next Government and consequently not for the APS. These two changes demonstrate the vulnerability of public sector reform both to top-down implementation and to disruption by short-term change agents. Consequently, this study contributed a difference in terminology: between implementation as the act of starting a management reform and the steering required for long-term change to become embedded.

This study contributes to research on long-term APS change by both the centre and Secretaries. One contribution is the APS-wide influence of changes in the PM&C Secretary, which was found to be both positive and negative. Changes in the Head of the APS could affect the implementation practice of APS Secretaries. These changes demonstrate the vulnerability of public sector reform frameworks both to top-down implementation from the centre and to disruption by other Secretaries as short-term change agents. Interviewee comments on these other Secretaries were not always favourable. Devolution of management responsibilities from the centre to individual Secretaries has increased the centre’s difficulty in making uniform change in those decentralised agencies (Morrison, 2014; Podger, 2018). This revealed the implementation risks of management reforms intended to be APS-wide. The consequent role of those Secretaries as change agents within their agencies is examined next.

7.6 APS Secretaries as Agency Change Agents

This study found management reforms were begun without sufficient attention being given to implementation planning. Interviews with former APS Secretaries revealed that reforms were commenced without plans for achieving their embedding. That is one responsibility of good governance by APS leaders (APSC, 2013b; Briggs, 2007b; MAB, 1993; MAC, 2010). As identified in figure 13 (chapter 6), the commonalities in these management reforms and their repetition led to the conclusion that APS Secretaries have not exercised long-term leadership in embedding such organisational governance. The repetition of the commonalities in those management reforms also demonstrates some corporate amnesia in the APS (Barrett, 2001b; Pollitt, 2000; Tingle, 2015; Wettenhall, 2011). This illustrates the importance of long-term corporate memory about APS management reform, enduring long enough for reform to become embedded in APS practice. For example, Secretaries generally did not continue with program evaluation after Portfolio Evaluation Plans were no longer required in 1996, exemplifying the loss of long-term momentum for reform change. The common factor is extended time.

A vulnerability in time is the short length of service of current APS staff. Their current median length of service is eleven years (APSC, 2018c). The implications are under-researched in Australia and have only recently been quantified in the United Kingdom civil service (Sasse & Norris, 2019; Stark & Head, 2018). This can be linked with the corporate amnesia about past reform discussed in Sections 7.7, 7.12, 7.18 and 7.19. By contrast, past research has focussed upwards: on the responsiveness of APS Secretaries to Government (Althaus & Wanna, 2008; Aucoin, 2012; Grube & Howard, 2016; Keating, 1999; Mulgan, 2008, 2010; Tiernan, 2012). Occasionally, this focus has been downwards: on the effects of change on lower-level staff (Brown et al., 2003; Carey et al., 2018; Orazi et al., 2013). It has not included responses by staff to changes in their Secretaries and associated agency priorities. Data from this study reveal differences in implementation responsiveness between Secretaries and staff.

Initially, respondents commented on common priorities of Reform and Performance. As outlined in Figure 19, former Secretaries supported reforms and emphasised extended performance (at fifty-seven per cent, a combined majority of their responses). This was closely matched by the ‘all others’ interviewees, at sixty-one per cent. Through a more nuanced analysis using all forty-three of the open codes (Appendix 9) and the axial codes, finer distinctions were then revealed between ‘Secretaries’ and ‘all others’, sixty-two per cent of whom were either serving or former APS staff (figure 9, chapter 5). These distinctions follow.

Figure 19 Ranked Priorities

  (a) Former Secretaries                       (b) All Others
  THEME                     TIMES RECORDED     THEME                     TIMES RECORDED
  REFORM                    128 (40%)          REFORM                    450 (34.5%)
  PERFORMANCE               56 (17.5%)         PERFORMANCE               346 (26.5%)
  GOVERNMENT AND MINISTERS  53 (16.5%)         GENERAL                   203 (15.5%)
  MANAGEMENT                41 (13%)           GOVERNMENT AND MINISTERS  141 (11%)
  GENERAL                   27 (8%)            MANAGEMENT                85 (6.5%)
  POLICY                    7 (2%)             PARLIAMENT                32 (2.5%)
  PRIME MINISTERS           5                  RESEARCH                  26
  RESEARCH                  3                  POLICY                    21
  PARLIAMENT                1                  PRIME MINISTERS           15

Given their executive and advisory responsibilities, former Secretaries unsurprisingly rated the priorities of government, Ministers and management higher than other (former) APS members.

Six codes accounted for thirty-four per cent of former Secretaries’ responses. Concerning Reform, former Secretaries most often commented on ‘Embedding’; ‘Change Agents’; ‘Agenda’; and ‘Implementation: in terminology’, while on Performance: ‘Including Evaluation’ and ‘Long-term’. By contrast, five codes accounted for thirty-four per cent of ‘all others’ responses. They prioritised the APS as a system (APS – ‘culture and roles’), while recognising the top-down origin of reforms (Government – ‘influence on reform’), the importance of demonstrating impact (Performance – ‘Including evaluation and Framework’) and the APS reform series (Reform – ‘Embedding’). Each set of respondents showed a high awareness of change agents’ influence in implementing reform. In the category Reform, fifteen per cent of Secretaries’ responses included change agents, while eleven per cent of all others mentioned them. A closer analysis of the open code relating to ‘reform-embedding’ showed that former Secretaries were twice as concerned (eight per cent) about this outcome as were ‘all others’ (four per cent). By reviewing the influence of changes in Government on APS reform priorities, this research distinguishes between the initial agent of change and the later management of change.

7.7 Differences between Change Agents and Change Management

There are differences between the change initiator (the Secretary) and the change manager, or consolidator, of reform. Rather than relying on single leadership from the top, a change team with a mix of skills may be better at changing whole-of-agency culture (Mento et al., 2002). The middle managers of reform implementation and members of their teams require different skills (Pick & Teo, 2017; Rouleau & Balogun, 2011). They can also be significant change agents in their own right (Buick et al., 2018). To embed change, such staff need to continue implementing reform over the long term, but turn-over of staff and its negative effects on long-term corporate memory have been largely ignored in policy research (Stark & Head, 2018). This study did not find that change teams were used; instead, change was identified solely with individual senior APS managers.

The time in office of those individual initiators of reform does not necessarily match the decades required to embed the changes of systemic management reform. Given the significance of top-down implementation, turn-over at the top affected the continued momentum of implementing a nominated reform. Subsequent informal data have now quantified the turn-over in both Ministers and APS Secretaries: “in the four years since the coalition government came to power in Australia [2013], there have been 42 changes of minister and 19 changes of secretary. No department has the same minister and secretary in place as when the government came to power. And under the previous Rudd/Gillard government there were 36 changes of minister and 22 changes of secretary” (Kamener, 2017). The personal occupancy and on-going influence of individual Ministers and Secretaries suggest an area of further research. These contributions link three factors: the initiators of reform change, the staff affected by that change and its later management, and the extended time of decades required to achieve embedded reform. Alternatively, these links may be observed elsewhere (other than in the implementation literature) in the current stages of the policy cycle.

The public policy cycle consists of five stages of change: “agenda setting; policy formulation; decision-making; policy implementation; policy evaluation” (Howlett & Cashore, 2014: p.23). Achieving whole-of-APS reform change in practice has lacked application of the policy cycle framework for managing change and evaluating reform results. This research identified that effective change management with whole-of-agency impact must be implemented in both central offices and their regions, at all staff levels. Long-term management reform changes need to be maintained in two areas: the collective APS and within an individual agency throughout its offices. These two contributions of this study reflect a gap between those who initiate a management reform and those implementing its changes.

Reform changes need to be accepted by all APS staff, who need to be actively involved. Management of such systemic change especially involves the majority of APS staff who are in the regions, if the implementation of management reform is to be effective over the long term. Although this was identified as lacking in the MfR evaluation (TFMI, 1992), no interviewee was able to identify any means of accomplishing this and it remains an area for future research. As outlined earlier, this research did not identify any methods used to embed reform in APS agencies. This contrasts with Secretaries’ nominal responsibilities both to manage their agencies and to implement reforms. A former SES officer (#4) noted: “one of the key systemic findings from the Capability Reviews was that people weren't connecting their work to the mission of the organisation. So you need that discipline cascading of strategic intent, so it ultimately breaks it out in to someone's work. And that was systemically missing”. Also missing in the research data were the links between Secretaries as change initiators and managing effective change throughout an agency. This gap is reviewed next.

Top-down imposed change does not lead to embedded change. There is a gap between initiating management reform and embedding any changed practice across the APS. Effective reform requires two linked factors: maintaining momentum amongst staff and utilising middle managers to bring staff on side (Barker et al., 2018; Pick & Teo, 2017; Podger, 2004; Rouleau & Balogun, 2011). It also requires avoiding two negative factors: the effects of staff turn-over on long-term corporate memory and the resulting corporate amnesia (Henry, 2015; Pollitt, 2000; Pollitt, 2009b; Tingle, 2015; Wettenhall, 2011). A contribution of this research links the foregoing as follows. Talking with staff about reform change is necessary but not sufficient, and leaders should allow chaotic change centred on staff (Karp & Helgo, 2008; Stewart & Kringas, 2003). Change must be communicated to employees through their direct supervisors (van der Voet et al., 2016). Such change efforts may encounter a variety of responses from staff, who may variously be change champions, converts, doubters and defectors (Jansen et al., 2016). Although middle managers play an important role in making sense of strategic change, their active involvement was not apparent from this research.

Long-term strategic APS change was not achieved by the management reform of evaluation. The claim of ‘implemented’ was made only ten years after its initiation (TFMI, 1992). This is arguably too soon: more time was required to embed public sector change and make it stick (e.g. Hood & Dixon, 2015; Ilott et al., 2016; Pollitt, 2013). The data revealed that the initial designated agents of change neither designed nor completed the last stage of change management frameworks: the institutionalising of reform (Fernandez & Rainey, 2006; Kotter, 1995). The impacts of changes in key change agents, both the head of the APS and individual APS agency heads, are further key contributions to initiating and maintaining the changes of an APS reform. Reform change was found to have been disrupted after their departure, with consequential impacts on staff priorities. Not previously researched, the short-term tenures of those two categories of APS change agents are findings examined next.

The short tenure of the APS Head is a missing factor in achieving long-term reform impact. The APS Head chairs the Secretaries Board and oversees the implementation of APS reform (APSC, 2014a). The tenure of that APS Head was compared with the extended time required to implement the changes of embedded management reform. This study identified short-term tenures in that Head (Figure 18) and considered whether the momentum of reform implementation might decrease with turnover among those APS Heads. The obverse of this short-termism has been noted: “The continuity of office holding [by Secretaries] was a major factor in allowing reform to advance and be sustained during the first two decades” [of the 1980s-2010s] (Halligan, 2018: p.250). It follows that reform momentum would be slowed within individual agencies by this turn-over of their senior managers. This is a negative factor in the change management frameworks that are outlined next.

7.8 Change Management Frameworks, Geography and Extended Time

Nominated change agents were found to have initiated management reform. The long-term continuation of a reform was, however, found to be vulnerable to any later changes of those same senior managers at the APS centre. Turnover among those senior personnel also exposed the lack of extended time behind earlier conclusions of ‘implemented’ (TFMI, 1992). Equally absent was the factor of geography in implementing reform change for whole-of-APS impact, where effective change management includes both central office and the regions. With sixty-two per cent of APS staff being in those regions (APSC, 2018a), most are located outside the centre of Canberra and the direct influence of Secretaries. This is a new geographic factor found to be absent in any theoretical analysis of implementing or embedding reform.

The MfR reform studied in this research was deemed to have been implemented in 1992. An unexplored reservation in that report was whether regional staff accepted and practised that management reform (TFMI, 1992). A later part of that Report concluded the reforms had not been continued long enough, nor had they been explained to regional APS staff, who had “often felt the impact of implementing the reforms without the broader perspective to carry them through” (TFMI, 1992: p.487). Despite interviewees being asked, from their reform experience, to comment on “What actual implementation methodology was used”, none referred to any methodology specifically relevant to staff in those regions. A contribution of this research is the identification of an implementation gap between the centre and the geographic periphery. Ensuring effective and embedded reform consequently requires extended time to have an impact on these dispersed staff.

Independent of reform, the study found a short-term attention perspective in the APS. Specifically, interviewees noted that the main current priority of the APS is being responsive to the priorities of the Government of the day. In responding to those government priorities, this short-term perspective is at variance with the long-term perspective already identified for effective implementation of management reform. This finding contributes to emerging concerns about making a reform stick (Hughes, 2016; Lindquist & Wanna, 2011; Pal & Clark, 2015). A consistent element of change management frameworks has been embedding, or institutionalising, change (Fernandez & Rainey, 2006; Kotter, 1995). No framework for achieving this outcome was set out. This study found there was a difference between initiation and embedding that may be linked with initial change readiness.

As an implementation factor, change readiness was found to be absent in the data. By first establishing such preliminary change readiness, agency management may be more successful (Cinite et al., 2009; Oakland & Tanner, 2007). Instead of an APS Secretary announcing the reform as ‘make it so’, change readiness is important in managing the expectations of all staff (Rafferty et al., 2013). The change initiator (the Secretary) and those needed to consolidate and embed that reform are different. In practice, achieving whole-of-APS change has lacked the application of change management over extended time that includes all staff. This research finds that extended time contributes a new element to change management. This contribution connects staff management and the maintenance of reform momentum across geography with effective reform implementation, which requires extended time to achieve results. The insights from this research about extended time as a new factor in implementation theory are discussed next.

7.9 Implementation Theory through a Lens of Extended Time

Extended time was not identified by the interviewees as an implementation factor. The literature identifies extended time as an important factor, but while some interviewees expressed time-related views, none was specific. Former APS staff aware (in 2016) of the evaluation of the MfR reform (in 1992) commented only on its being undertaken and not on the ten-year period it evaluated. This decade of implementation practice can be contrasted with research suggesting that fifteen years was too short and an extended time of twenty years would be required to assess success (Boston, 2000; Bovaird & Russell, 2007). The adaptation of Pettigrew’s ‘longitudinal’ (1990) as ‘extended time’ in this thesis also links with evaluation theory, through the short, medium and long-term time scales of evaluation’s program logic (Figure 7, chapter 3). This is relevant to assessing reform embedding, which is sometimes asserted but not always demonstrated (Hughes, 2016). When the extended time of program logic is included in public sector change management frameworks, it highlights the importance of distinguishing between the initial change agents of reform and the later systems for maintaining the momentum of change.

Maintaining long-term reform momentum was critical to reform success in this research. Reform changes were not managed by Secretaries long enough to become embedded. The importance of extended time is evident in the implementation and demise of program evaluation. Lauded as embedded by the then Finance Secretary (Sedgwick, 1994), the same officer seventeen years later (as APS Commissioner) found it absent (Sedgwick, 2011a). Mr Sedgwick drew upon the conclusion of the then Finance Secretary that “a stronger and more strategic, and forward-looking culture of evaluation [was needed] to underpin the reform agendas” (Tune, 2010, slide 8). There is no evidence of any follow-up action.

These absences of long-term attention to implementing APS management reform are of current concern. After this research was completed during 2016-17, a paper for the APS Review (Turnbull, 2018) by the Australia and New Zealand School of Government (Gray & Bray, 2019) examined a similar topic: the current use of evaluation in the APS. The gaps in long-term APS management reform practice found in this research were supported by that paper’s following three observations. Referring to the 1996 changes previously noted, the paper observed “evaluations ceasing to be systematically used in the budget process. There is some evidence that these led to fewer evaluations being conducted and fewer being published” (Gray & Bray, 2019: p.11). No further analysis was conducted by the authors to identify the reasons for fewer evaluations, and the paper then observed “from 2007 there was some recentralisation of public sector management, but there was no systematic change in the use of evaluation in budget processes and no consistent approach to the publication of evaluations” (Gray & Bray, 2019: p.12). Indirectly, these comments supported the earlier conclusions of this research, discussed in paragraphs 6.5, 6.9 and 7.4, that identified tensions between the centre (recentralisation) and practice by individual APS Departments. The following comment in that paper also identified the significance of long-term change management in implementing APS management reform.

The following analysis contributes to the discussion in this thesis of the management reform series and the failure to embed. Gray & Bray concluded that “the continued existence of these significant concerns, after many decades of APS reform, and the recurrent calls for a stronger focus on evaluation, highlights the challenges of achieving change, and a need for greater reflection on why this had failed in the past” (Gray & Bray, 2019: p.13). This research by the Australia and New Zealand School of Government provides current support for the conclusions of this thesis about the importance of extended time in frameworks for implementing and establishing demonstrably effective reform. Gray & Bray’s research is also evidence of current thinking on implementation, management reform and the contribution of evaluation to APS practice, highlighting the continued relevance of this research, which also provides bases for further research. The period of time in which an initial change agent undertakes a management reform was found to be a significant factor in the achievement of reform results over extended time. This study consequently also challenges the multiple current meanings of time associated with the extended time required for embedding management reform. Three current meanings are discussed next.

There are three meanings of time associated with embedding. None is specific: achieving success through effective performance management systems (Newcomer & Caudle, 2011); the “need to embed reform momentum” (‘t Hart, 2011: p.209); and reform as “the dominant way of life” (Fryer & Ogden, 2014: p.1043). In APS practice, there have been repeated calls for embedding to occur (MAB, 1993; Tune, 2010; Sedgwick, 2011b; Sedgwick, 2014). These calls came with no frameworks, until the introduction of the PGPA Act in 2013 (Finance, 2014; Finance, 2017c). The notable difference is between the previous central administrative requirements (e.g. the CIU: 2006-2015) and the uniform national coverage of black-letter law. In implementing reform with impact, this is the difference between advisory and mandatory. By its requirement to demonstrate non-financial performance outcomes, the nationally mandatory PGPA Act is an embedded reform across the APS (Barrett, 2014). That conclusion, by a former APS SES officer during the MfR reform and later national Auditor-General, was absent in interviewee responses. Those responses identified only the often-observed need to demonstrate the eventual outcomes from the disruptions identified to management reforms. Such disruptions are revealed in the following management reform cycles in the APS.

7.10 Revealing the Short-Term APS Reform Series

The turns of the reform series contest whether the APS is fit for its long-term purpose. This thesis found there were similarities in three whole-of-APS management reforms across three decades. Figure 20 demonstrates these similar priorities of achieving outcomes.


Figure 20 Similarities in Management Reforms

  Reform                                                 Year     Priority
  Managing for Results (MfR) (Keating, 1990)             1984-96  APS to manage for results and program outcomes
  Ahead of the Game, Blueprint for the Reform of         2010     APS must become outcomes-based
  Australian Government Administration (AGRAGA, 2010)
  Public Governance, Performance and Accountability      2013     The APS must measure performance in achieving purposes
  (PGPA) Act (Bradbury, 2013)

Reforming the management of the APS has not achieved embedded change and this reform series continues to repeat three whole-of-APS management priorities.

This study found that three management priorities were repeated across those thirty years. As detailed in figure 10 (chapter 6), these repeated priorities were: APS responsiveness to Government; demonstrating APS program performance; and the effectiveness of those program performance outcomes. They are not necessarily linked, as ‘responsiveness’ may be short-term (Mulgan, 2008). This would be balanced against long-term APS stewardship responsibilities “to build for the future, continually developing the right capability so that the APS can always deliver the best outcomes for the Australian community” (APSC, 2019; Tiernan, 2015b). Alternatively, ‘performance’ may be limited to twelve months in Annual Reports (Davis, 2017; Milazzo, 1992). A structured means of demonstrating ‘effectiveness of outcomes’ can be found in program evaluation (Bourgeois, 2016; DeGroff & Cargo, 2009; Funnell & Rogers, 2011). Program evaluation is now recommended by the Finance Department (Morton & Cook, 2018). These repeated priorities reflect turns of the management reform series (Pollitt & Bouckaert, 2011; Rhodes, 2016; Wettenhall, 2013). This thesis has found a continuous series of APS management reform over thirty-five years, illustrated in figure 21.

Figure 21 The Timelines of APS Management Reform: 1983 - 2015/18

  Year     Main Reform Issue                                                   Years since
                                                                               previous reform
  1976     Coombs Royal Commission                                             -
  1983     Hawke Govt statement on APS efficiency, effectiveness,              7
           equity, responsiveness
  1984     Financial Management Improvement Program (FMIP: Finance Dept)       1
  1984     Creation of Senior Executive Service (APS senior management         -
           strata)
  1987     Public Service Board replaced by Public Service Commission          3
  1987     Management Advisory Board (MAB) established                         -
  1987-88  Introduction of annual agency Portfolio Evaluation Plans            1
  1989     Management Improvement Advisory Committee formed                    2
           (as a sub-Committee of the MAB)
  1990     APS requirement for Portfolio Evaluation Plans (Finance Dept)       1
  1992     Evaluation of Decade of Reform (of FMIP = Managing for              -
           Results: MfR)
  1993     Introduction of APS core competencies (Public Service Commission)   3
  1997     Customer Service Charters                                           4
  1998     Introduction of Financial Management & Accountability (FMA) Act     1
           and Commonwealth Authorities & Companies (CAC) Act
  1998     Introduction of Accrual Accounting                                  -
  1999     New Public Service Act                                              1
  1999     Senior Executive Leadership Capability Framework                    -
  2003     Establishment of Cabinet Implementation Unit (CIU - PM&C)           4
  2007     Extension of Secretaries’ contracts, from three to five years       4
  2008     Termination of Secretaries’ performance bonuses                     1
  2010     “Ahead of the Game” APS reforms                                     2
  2013     PGPA Act replaces FMA/CAC Acts – new requirement for                3
           Annual Performance Statements (in addition to Annual Report)
  2015     CIU abolished (but functional responsibility retained by PM&C)      2 (12)
  2018     Prime Minister’s Review: is the APS Fit-for-Purpose?                5

Source: Adapted from Figure 3, Chapter 2.

Common factors in these reforms concerned the APS as a single entity, the central management of it, the competencies of senior executives and staff, or establishing APS effectiveness.

The lack of embedding of past management reforms has been indirectly quantified. ‘t Hart has concluded: “Every two decades or so, the federal bureaucracy is encouraged to take a good hard look at itself with a view to reforming how it is organised and operates” (’t Hart, 2010: p.1). Closer analysis (figure 21) reveals a much shorter reform series in APS practice, of two to four years, highlighting two implementation factors. The first is the attention of an implementation agent being diverted by incoming new reforms. This occurred when program evaluation was displaced in 1996 by the introduction of accrual accounting (Mackay, 2011). The second is the importance of consolidating each management reform for the long term and its embedding. Despite continual APS management reform, the last evaluation was in 1992 (TFMI, 1992). This series continued with a new emphasis on APS implementation in 2003.

This was the formation in the Prime Minister’s Department of the Cabinet Implementation Unit (CIU). It was initially seen as a new paradigm in APS management (Halligan, 2005; Wanna, 2006). However, the CIU did not evaluate whether the subsequent implementation of Cabinet decisions achieved their intended outcomes (Wanna, 2006). This study concluded that the CIU’s long-term effectiveness was constrained, as it depended on self-reports from agencies on their progress with implementing Cabinet decisions. Interviewees at all levels were unaware that the CIU had been abolished in 2015 (Gold, 2017). Its functions remain within the Prime Minister’s Department.40 The CIU’s twelve-year existence exemplifies three factors examined in this thesis: the series of reform implementation; incompleteness in implementation frameworks and reform impermanence; and a lack of learning from evaluating management reforms. It also highlights the disappearing structural means by which the centre influences the devolved APS and its management by Secretaries. A missing factor in this short-term reform series was found to be managing and maintaining reform momentum over extended time. This is now discussed separately.

7.11 Implementation - Missing Factors of Momentum over Extended Time

The management of reform and its momentum towards long-term outcomes was lacking in the data. A time-related finding in this thesis was that implementing the MfR reform benefitted from the ten years’ continuity (1986-96) of its initiator, Dr Michael Keating: first as Secretary of Finance (for five years), then as Secretary of PM&C and Head of the APS for a further five years. His personal role in implementing economic and management reforms during those times has received only limited research (e.g. Goldfinch, 1999; Mulgan, 1995; Waterford, 1991). There were many positive comments in interviews with former APS Secretaries and staff of that MfR period about the personal impact of Dr Keating in maintaining the priority of the MfR reform during 1986-1996. This contrasts with the following limited span of some four months for the penultimate management reform, “Ahead of the Game”, presented in March 2010.

40 At https://www.pmc.gov.au/government/policy-implementation (accessed 14/8/19)

‘Ahead of the Game’ was initiated by then Prime Minister Rudd in 2007 (Halligan, 2010). His then Secretary (Moran) occupied that position for a shorter period of influence, of eighteen months (March 2010 to September 2011: figure 18). Relevant interviewees from that period revealed that this reform stopped with the later changes of Prime Minister (June 2010) and of Government in September 2013 (Horne, 2010). The implementation momentum of ten years for the MfR contrasts with that for ‘Ahead of the Game’, which was stopped somewhere between four and eighteen months. These periods become relevant to undertaking continuous change management over extended time. Such time is not always a feature of top-down APS reform.

APS reform is initiated by the change agent at the top: the APS Head. As Chair of the Secretaries Board, that occupant’s professional impact on their peers is significant, especially for maintaining the momentum of reform change (APSC, 2014a; Barker et al., 2018; Mulgan, 2008; Podger, 2004; Podger, 2013; van der Voet et al., 2015). With the changes of public sector reforms almost taken for granted (Hupe & Hill, 2016), reviews have either searched for outcomes from decades of reform (Hood & Dixon, 2015) or asked “why reforms so often disappoint” (Aberbach & Christensen, 2014). Formal evaluation has been rare (Briedahl et al., 2017). One example of a reform which did deliver outcomes over extended time follows.

An example of the success of a long-term focus on implementation is Australia’s experience with reducing smoking rates. Over twenty-five years, smoking rates in Australia have been halved. Figure 22 sets out the periodic policy reform changes which, when implemented, resulted in this long-term decline in smoking. This exemplifies the long-term implementation momentum needed to achieve reform impact and effectiveness and supports the re-purposing of ‘longitudinal’ (Pettigrew, 1990) into ‘extended time’.


Figure 22 Implementing Policy for the Long-Term – with Demonstrable Impact
Daily Smoking in the General Population aged 18 years and Older and Key Tobacco Control Measures Implemented in Australia since 1990
Source: Health (2018b)

This figure demonstrates one practice-based application of ‘extended time’: maintaining reforms designed to achieve an original policy goal over decades. The study data did not reveal any frameworks for assessing the effectiveness of management reforms and their embeddedness over the long term of decades. Discussion of the implications follows.

7.12 Management Reform Missing Long-Term Effectiveness

There has been an absence of evaluation of the long-term effectiveness of APS management reforms. The last such evaluation, of the MfR reform in 1992, occurred after only a decade (Keating, 1990; TFMI, 1992; Johnston, 1998). This study contributes to that gap in research on Australian public sector management reform. Previous research described the initiation of each reform (e.g. Lindquist, 2010; Lindquist et al., Eds, 2011; Lindquist & Wanna, 2015; Wanna, 2006; Wanna, Ed, 2007). There was no later evaluation of the subsequent outcomes of reform (O’Flynn, 2015). Whether APS management reforms achieve their objectives over the long term has been overlooked by research on reform and evaluation.

Long-term reform achievement was also absent from interviewees’ comments in this study. Differences in terminology were identified between starting a reform and declaring it implemented. At some later stage, implementation was declared complete and re-defined as an on-going program for separate administration. This distinction in unspecified time is not apparent in the implementation literature, where periods of from five to ten years up to thirty years are cited as needed to achieve reform change (Hood & Dixon, 2015; Kotter, 1995). Consequently, any conclusions about management reform being implemented were subjective, and the evidence for claiming such systematic change was not apparent.

This lack of evidence supports a key finding of this study: the absence of long-term implementation momentum in program evaluation, due to changes in senior Secretaries. Secretaries generally did not continue with program evaluation after the abolition of Portfolio Evaluation Plans in 1996, with “only a few remaining evaluation islands among departments/agencies” (Mackay, 2011: p.15). Mackay did not name those individual departments, which related to the period after 1996. No studies have been published on this differential practice by individual APS departments, suggesting an additional area for comparative research. This chapter has explored the role of Secretaries as change agents and the influence of changes in the Head of the APS on their priorities. Interviewees, however, did not attach any significance to the length of tenure of that APS Head, nor to its consequences for the priorities of APS Secretaries.

Those Secretaries are responsible for reforming their agencies. The need for such management reform has been repeatedly asserted by government, and the reform then considered implemented (AGRAGA, 2010; Dawkins, 1983; Keating & Holmes, 1990; TFMI, 1992). Those conclusions involve a conflict of interest, as they were reached by the same Secretaries who had earlier initiated the management reforms. Those initiators took it for granted that the reform would be maintained and would result in uniform change across the large and dispersed APS. This research has challenged the assumption that an APS Secretary can be a long-term, effective change agent who implements embedded management reform. This thesis did not find any evidence to support that assumption and, alternatively, found evidence to the contrary. Implementation of APS reform was disrupted by changes at the top, in both heads of the APS and senior Secretaries. This evidence was derived by applying the longitudinal factor of ‘extended time’ to contrast the initiation of management reform with the decades that research reveals are required to embed public sector management reform.


This chapter examined whether the program evaluation reform could have become embedded. It has examined what was missing from the initiation and implementation of effective reform and concludes that it is effective change management over the long term. Recently, Shannon (2017) has suggested that a better understanding of the processes necessary to achieve reform outcomes could result from refocussing reform implementation as change management. What follows are the linkages found in this research between the factors of implementation, change management, extended time and evaluating the effectiveness of management reform. As the results of analysing what happens in practice after an APS reform is started, these linkages enhance current implementation and change management theories.

Three theories have been linked in this study through a fourth factor, relating to effectiveness. The theories of implementation, change management and evaluation have been linked through the factor of extended time and address the APS system to be reformed. They combine reform initiation by top-down change agents; the initial reform momentum being maintained by involving all staff; the reform changes being continued long enough to affect geographically-dispersed staff; and, finally, long-term reform effectiveness being established through formal evaluation. Consequently, drawing upon the framework of O’Flynn (2015), there are three levels for reforming public sector systems: the macro (top-level reform initiators: agency heads and senior managers); the meso (mid-level managers as implementers); and the micro (central agency monitors and evaluators, both internal and external). All three were interviewed in this study.

Interviewing representatives of all three levels drew out any links in APS practice. The absence of these links is indicated in the conclusion that past APS management reforms were not embedded (Finance, 2014). The effectiveness of public policy is challenged by APS institutional amnesia (Stark & Head, 2018; Tingle, 2015; Wettenhall, 2011). Such corporate amnesia may underlie the repetitions of the APS management reform series (figure 21). A new priority in APS practice is demonstrating effective non-financial performance, with evaluation now acknowledged as one means (Finance, 2017a; Finance, 2017b; Morton & Cook, 2018). These are the current APS contexts for the research problems of this study and the contributions identified below.

7.13 Contributions to Theoretical Research Problem

The theoretical problem is the framework for successfully embedding effective public sector management reform. This research contributes insights into enhancing implementation theory by applying a lens of extended time to APS public sector management reforms and reviewing whether they endure in practice to become embedded. The applied meanings of ‘endure’ and ‘embedded’ are not synonymous: the evaluation requirement from the MfR era endured for ten years but did not become embedded (Finance, 2014; Mackay, 2011; TFMI, 1992). This study contributes new knowledge to two current research directions.

The first is the lack of frameworks for implementing reform as a related set of processes that result in effective reform (O’Flynn, 2015). Repeated objectives over thirty years were found in APS management reforms, suggesting that past reforms have not worked and that current theories are insufficient to guide the implementation of the current management reform: the PGPA Act. The second direction is what happens after policy has been decided: whether such policy sticks (Aberbach & Christensen, 2014; Hupe & Hill, 2016; Ilott et al., 2016; Lindquist & Wanna, 2011). By contrast, Howlett has concluded that “policy implementation within the policy sciences remains fractured and largely anecdotal” (Howlett, 2018: p.1). This research contributes to extending implementation theory through three further disciplines.

Three research fields contribute to enhancing implementation theory: extended time (Pettigrew, 1990); effective reform change (Fernandez & Rainey, 2006); and evaluation that demonstrates reform achievement (Rogers, 2008; Scheirer, 1987). Two further questions about APS management reform practice were also considered. Could the APS management reform have worked? What does change theory tell us that is not happening? By adopting a cross-disciplinary approach to the implementation of management reform, the following contributions enhance current implementation theory.

7.14 Cross-Disciplinary Approach to Implementation Theory

The contributions of this study have been cross-disciplinary, linked by the factor of extended time. Derived from Pettigrew (1990), this factor established that similar objectives in APS performance reforms were repeated over an extended period of thirty years. Their similarity related to demonstrating APS effectiveness: either by advisory evaluation (MfR) or by legally-mandated non-financial performance reports (PGPA Act). This research also contributed to three further research areas. The first is continuing the examination of public sector management reform (Pal & Clark, 2015; Podger, 2018; ’t Hart, 2011; Tiernan, 2011; Uygel & O’Flynn, 2017). The second is the wider context of this repeated series: the policy reform cycle and the desired outcome of embedding (APS, 2013a; Barrett, 2014; Finance, 2014; Newcomer & Caudle, 2011; Wettenhall, 2013). The third is adopting a cross-disciplinary approach to managing reform implementation. Potentially enhancing implementation theory, those three disciplines are: designing reform through evaluation’s program logic; change momentum/management; and, finally, evaluating outcomes (Armenakis et al., 2000; Barker et al., 2018; Boston, 2000; Craft & Halligan, 2017; Rogers, 2008). These factors form the following cross-disciplinary assessment of current implementation theory.

Public sector management reform is a continuing research priority. Recent examples have included Aberbach & Christensen (2014), Baehler (2003), Hood & Lodge (2007), O’Flynn (2015), Pollitt (2018) and Pollitt & Dan (2013). Much of this research was derived from UK practice (e.g. Bovaird & Russell, 2007; Hood, 1991; Hood & Dixon, 2015), not Australian reform practice. Although initial Australian reform practice has been repeatedly analysed (e.g. Althaus, 2011; Barrett, 2017; Di Francesco, 2001; Halligan, 2018; IPAA, 2018; Lindquist, 2010), evaluating the lessons from those management reforms has been rare (Breidahl et al., 2017). This study contributes to evaluating whether APS management reforms stick, to endure as embedded practice.

There is emerging research on making policy stick for the long term, which links this study’s factors of ‘embedding’ and ‘extended time’. These two factors are exemplified in the United Kingdom examination “Making Policy Stick: Tackling Long-Term Challenges in Government” (Ilott et al., 2016). A further new factor in implementing administrative reform in the Republic of Ireland was the creation of a single responsible agency, removing previously divided implementation responsibilities (MacCarthaigh, 2017). By contrast, current Australian practice for implementing the PGPA Act divides responsibilities between the initiating Finance Department (Finance, 2014) and the national auditor, the ANAO (ANAO, 2017b). Further relevant re-framing of implementation theory has included both extended time and managing the successful changes from reform (Barrett, 2017; Newcomer & Caudle, 2011; Pal & Clark, 2015; Shannon, 2017). This introduces two additional factors in implementing public sector management reform: that it is demonstrably successful and that it sticks to become embedded in permanent practice. These were factored into the examination of the research data.


Another new factor from this study is a cross-disciplinary enhancement of management reform implementation. Currently outside implementation theory are the following three factors: evaluation’s program logic (Baehler, 2007); long-term change momentum/management (Barker et al., 2018; Buick et al., 2018; Karp & Helgo, 2008; Pettigrew, 1990); and evaluation of outcomes (Blalock, 1999; DeGroff & Cargo, 2009; Durand et al., 2014; Hunter & Nielsen, 2013; Rogers, 2008). There are growing attempts to integrate those currently separate elements, especially into a management life cycle designed to include evaluation (Bourgeois, 2016; Funnell & Rogers, 2011; Hatry, 2013; Kroll & Moynihan, 2017; Scheirer, 2012). Otherwise, implementation is currently regarded within a limited five-stage policy process: “agenda-setting, policy formulation, decision-making, implementation, policy evaluation” (Howlett, 2018: p.18). That framework omits two intervening stages, change management and extended time, in considering whether policy changes lasted long enough to become embedded. These are the gaps in current implementation theories for embedding the changes of effective management reform in APS practice.

7.15 Cross-Disciplinary Implementation and APS Practice

The challenge of embedding reform changes in APS practice represents a research-practice gap. Part of this gap was bridged in a recent re-framing of reform implementation as change management (Shannon, 2017). This may be relevant to the current review of whether the APS is fit for purpose (Thodey, 2019; Turnbull, 2018). A contribution may be found in the links of purposeful program theory in designing change, program implementation and the intended outcomes from the initiation of that management reform (Funnell & Rogers, 2011). This study identified differences in long-term meaning between ‘changed’ and ‘implemented’: the APS might be ‘changed’ but effective reform not ‘implemented’. These differences represent conceptual challenges to evaluating the changes over extended time and the outcomes from APS management reforms. Although the repeated reform series has been identified, the underlying reasons for its repetition have not been examined (Jones & Kettl, 2003; Hood & Lodge, 2007; Pollitt, 2013). That series was examined by beginning with program evaluation in the 1980s MfR era, which lasted about fourteen years (Mackay, 2011). It shows the difference over extended time between ‘implemented’ and ‘embedded’, suggesting the need for a framework which establishes whether the APS is achieving its purpose(s), by either routine practice or the embedded changes of management reform.


7.16 Re-Framing Implementation Practice by Evaluation and Extended Time

Current implementation theories do not include formal evaluation of the final outcomes, to test their impact. By contrast, the commencement of public sector management reform has been well-researched (e.g. Christensen & Laegreid, 2006; Hupe & Saetren, 2015; Sabatier & Mazmanian, 1980; Saetren, 2005, 2014; Van Meter & Van Horn, 1975; Wanna, Ed, 2007). Initial APS reform practice has also been of long-standing interest, both to Australian analysts (e.g. Dawkins, 1985; Holmes & Shand, 1995; Ives, 1994; Keating, 1989, 1990; Moore-Wilton, 1999; Moran, 2013; Sedgwick, 1994, 2011a) and to researchers (Davis & Wood, 1998; Halligan, 2005, 2007; Hood, 1991; Hughes, 1992; Lindquist & Wanna, 2011; O’Flynn, 2015; Rhodes et al., 2008; Yeatman, 1987). This has not resulted in any implementation frameworks that include evaluating the final reform outcomes and their effectiveness.

In practice, there has been only one evaluation of an APS-wide management reform, reported nearly three decades ago (TFMI, 1992). Only recently has this absence of evaluation been observed: “we are poor at evaluating reform both theoretically and practically; indeed our lack of attention to evaluation must be one of the great collective failings of public administration … we need to move conceptually to better understand public sector reform and inform the design and implementation of reform practice” (O’Flynn, 2015: p.19). No reasons were advanced for this failure to evaluate. The findings of this research contribute to addressing that failure and the gap in theory between long-term reform intentions and outcomes.

This gap is a factor in implementing public sector reform. The lack of evaluation of reform implementation has been repeatedly observed (Breidahl et al., 2017; Pollitt, 1995; Pollitt & Bouckaert, 2011; Rhodes, 2016; Rhodes & Tiernan, 2015). Pollitt & Bouckaert make an important distinction between reform ‘process’ and ‘outcomes’. The evidence about public sector reform “turns out to be about changes in activities and procedures rather than about actual outcomes; … results are often the elephant in the room for management reforms” (Pollitt & Bouckaert, 2011: p.214). This is one shift in implementation theory: from reform as initial inputs to establishing demonstrable results. That assessment by Pollitt and Bouckaert (2011), spanning three decades, introduced a key factor of this study: extended time.

Over decades there have been repeated calls for APS management reform to be embedded. These calls have been made by the APS centre, but the framework for achieving this desirable end-state has not been defined, only that reform should be self-sustaining (AGRAGA, 2010; MAB, 1993; MAC, 2010; ’t Hart, 2011). There was no implementation framework in the recent use of ‘embed’ by the APS centre (“Embedding APS Values”: APSC, 2013a). Past APS reforms did not become embedded (Aberbach & Christensen, 2014; Finance, 2014). The practice of APS management reform might be the “puzzle we can never solve” (O’Flynn, 2015: p.19). This study has found that ‘embed’ has not featured in the outcomes of APS implementation practice, contributing to the challenges of a Secretary’s ability to “make it so”. In APS practice, the links in effective implementation were absent. No implementation theory has been derived from evaluating Australian management reform practice or from establishing long-term meanings of success from management reform. This research contributes to enhancing current implementation theory with two factors: extended time and defining success by evaluation.

7.17 Defining Performance Reform Success through Evaluation

This thesis finds that the factors for achieving reform success can be linked with both extended time and evaluation. So far, this chapter has reviewed the place of extended time in effective implementation, drawn together the impacts of the heroic leader as reform initiator, and widened the implementation factors to include the place of middle managers and their teams in effective implementation. Later success in implementing management reform can be defined through evaluation, which is discussed next.

Previous success in the management reforms under review was judged by broad macro labels, including “Managing for Results” (Keating, 1990) and a high-performing APS that was to be “Ahead of the Game” (AGRAGA, 2010). By Government direction, APS program evaluation was no longer required after 1996, as evaluation had come to be seen as a negative, leading to agencies losing programs or funding (Mackay, 2011). The cessation of program evaluation resulted from the change of Government in 1996 and the commensurate changes in the heads of both the APS and the Finance Department (the initiator of the MfR reform). That 1996 decision relieved APS senior managers of adopting evaluation as a routine internal management practice for demonstrating their program effectiveness (Downing & Rogan, 2016; Funnell, 2000; Guthrie & English, 1997; Hunter & Nielsen, 2013). Interviewees made no observations on this cessation of program evaluation in relation to the means of managing and demonstrating the long-term effectiveness of the APS.


This effectiveness is again being required, over an extended time of four years. Evaluation can contribute to the PGPA Act’s non-financial performance requirements, as “performance information that allows the managers of activities to understand whether the tasks they have been allocated are providing the results expected by senior managers, and if not, why not” (Finance, 2017f: p.11). This distinguishes long-term program performance from the APS’s short-term responsiveness to the government of the day. This difference in management practice over time was considered by interviewees to be the APS’s main current priority. Some more thoughtful assessments identified that this has resulted in the APS being both over-responsive to Government and short-term in its focus on program priorities. This current state of the APS lacks the longer-term focus needed for program evaluations and the evidence of APS effectiveness. The relevance of this study is demonstrated by the recent (re)introduction of program evaluation to support these PGPA Act requirements (Morton & Cook, 2018). An alternative and on-going means of generating information for management about reform and program performance follows, bridging a gap between the data of this research and implementation theory.

7.18 Information Systems for Effective Performance and Corporate Memory

Information on reform performance aids the effective implementation of management reform. This remains a key finding from the 1992 MfR evaluation: “performance information is a key component of making the reforms work” (Bartos, 1995: p.391). Systems to inform management about the progress of reforms and programs have long interested the centre: both Parliament and the ANAO. In reviewing reform implementation (in this example, of the outcomes/outputs Budget framework), Parliament concluded: “A practical and informative performance information framework is an integral element of the new outcomes and outputs budget framework as it enables the understanding and monitoring of agency outcomes and outputs. … The Committee is satisfied that the guidance advice Finance and the ANAO provides to agencies is at an appropriate level. However, it is important to determine whether this guidance is adopted or has some other positive outcome” (JCPAA, 2002: p.v). Three of those comments contribute to this research.

Those comments link implementation and outcome in a framework of effectiveness. They are: the importance of monitoring progress on performance; understanding the eventual outcomes from implementation; and ensuring that published guidance from the centre has its intended impact across the APS. Even if now seventeen years old, the Committee’s Report is a reminder of the implementation factors of extended time and of agency effectiveness established by evaluation. That Report also questioned the impact of central guidance on the decentralised APS departments.

The APS has been slow to implement guidance from the centre. Time lags of ten years in APS practice could be seen after the JCPAA Parliamentary Report of 2002. Despite central guidance (ANAO, 2011) on developing and using key performance indicators (KPIs), APS progress on performance reporting through those KPIs was shown to be slow and uncertain. Although separated by eleven years, conclusions by the ANAO in 2013 repeat the 2002 comments by the Parliamentary centre: “The establishment and reporting of entity key performance indicators is a fundamental underpinning of the Australian Government’s performance measurement and reporting framework. Key performance indicators are expected to inform entities and government about the performance of programs including their impact and cost‐effectiveness, and signal opportunities for improvements. Key performance indicators also provide the basis for entities and ministers informing the Parliament and the public of the effectiveness and efficiency of government programs” (ANAO, 2013b: p.11). The slow use of KPIs reflects both the slow momentum of change in practice and the lack of an embedded performance culture in the APS.

The lack of an APS performance culture is current and systemic. In its 2013 report above, the ANAO also concluded: “The assessment of entity performance measurement and reporting identified that entities continue to experience challenges in developing and implementing meaningful KPIs, and that the administrative framework supporting the development and auditing of KPIs remains problematic” (ANAO, 2013b: p.19). The ANAO did not identify any best-practice use of a management or performance information system within an agency to monitor the progress of a reform and its actual outcomes. Good central project management practice was lacking. This finding has implications for the on-going information required for advising government and Parliament about the impacts of management reform and for any ongoing assessments of reform effectiveness. Importantly, such information may assist in avoiding the repetitive reform series by being a source of long-term corporate memory.

Corporate memory can be supported by management information systems. Such corporate memory and information was lacking in the recent claim in the Review of the Implementation of the PGPA Act that “It is not clear why evaluation practice has fallen away, …” (Alexander & Thodey, 2018: p.14). There was an absence of current memory about the demise of program evaluation that had occurred only some twenty years previously. Such corporate amnesia about past practice has negative effects on the APS (Pollitt, 2000; Tiernan, 2016; Tingle, 2015; Wettenhall, 2011). According to one long-serving former Secretary of the Treasury, this loss has resulted in the APS forgetting how to govern because there was “no one to learn from” (Henry, 2015). This forgetting suggests that one source of learning for APS staff can be corporate information.

Retained corporate information systems provide information about the performance of past reforms. This is achieved through key performance indicators that are actively monitored, reported and used in the management of implementation and in the measurement of reform and program outcomes (ANAO, 2013b). During this research, interviewees observed only that embedding reform had required either law or a significant realignment of senior management capabilities (the formation of the SES). None was able to identify any means of rating management reform as successful and embedding those results in long-term corporate memory.

7.19 Embedding Successful Change Requires Extended Time

Embedding the changes of management reform in corporate practices requires decades. This research has linked evaluation with management information systems and corporate memory (or its opposite, corporate amnesia), resulting in a common theme of extended time. The use of extended time in Pettigrew’s ‘longitudinal’ (1990) is also a feature of evaluation, in the short, medium and long-term outcomes defined by program logic (McLaughlin & Jordan, 1999). In implementation practice, reform time-frames have varied widely in their meaning, up to thirty years. This study found that subjective definitions of ‘completed’ were accepted from the APS by the central Cabinet Implementation Unit in the Prime Minister’s Department. That meant there were varying meanings of ‘completed’ and ‘long-term’ in APS practice, although none was offered by former or present APS interviewees. No standard definitions of ‘implemented’ were found in this study, which identifies the challenge of establishing whether APS-wide management reforms were implemented uniformly and became embedded in the single APS.

This lack of standard definitions extends to the meanings of ‘embedded’ in reform frameworks. Although acknowledged as a desirable outcome by participants in this study, embedding is sometimes asserted but not always demonstrated (e.g. Hughes, 2016). This was a feature of its single use in recent practice, calling for an embedding of APS values but lacking an implementation framework for making this happen (APSC, 2013b). No suggestions were made by the interviewees about the administrative or systemic means of embedding reform changes, except through the generic call for their adoption by all APS staff. Also absent in the interviews was recognition of the need to influence not just current staff, but also future members of the APS, to act on those embedded management changes. ‘Embed’ did not feature as an implementation priority with common meanings in the results of this research. These conclusions are relevant to current APS practice and its priority of implementing programs through the latest reform, the PGPA Act. The absence of implementation links between the initiating and the embedding of management reform is discussed next.

7.20 Gaps in Elements Linking Implementation

This study did not find any evidence of APS Secretaries utilising models or formal implementation approaches to bring about either whole-of-APS or whole-of-agency embedded change. Five gaps were found in the links for implementing management reform. The first gap was between the centre (e.g. the Prime Minister’s Department) and individual APS departments. The second was between an individual agency Secretary and changing that agency’s practice over extended time so that management practice endures. The third involved maintaining change momentum within an agency through middle managers bringing along all staff, because achieving successful change requires reform momentum that can fluctuate between acceleration and moving backwards. This study then identified a fourth, geographic gap in maintaining the momentum of change and management reform, which is explored in the following paragraph.

Reform momentum was affected by this fourth gap of geography. This has not been explored in past research and is a new finding of this thesis. This study found that no implementation attention was paid to the geographic distance between the central reforming agents and the APS regions: the locations of the front-line staff who make changes happen. Between senior managers in Canberra and APS regional staff, this geographic gap confirmed an earlier finding about implementing the ‘managing for results’ reform: that regional staff did not share the same positive conclusions about reform success as Canberra managers (TFMI, 1992). An example of the distance requiring this management and uniform implementation impact is the 3,800 kilometres41 between Australia’s capital of Canberra and its western-most city of Perth. This distance can represent a potentially incomplete management information link between the centre and its regions, within a framework of maintaining reform momentum and achieving the long-term effectiveness of those management reform changes.

The fifth and last gap relates to managing those changes as an implementation factor in successful reform management. Change management is a staff capability (Briggs, 2007a) which requires both ‘continuous monitoring and evaluation’ (McPhee, 2007: p.xiii). Incorporating evaluation’s program logic into systems of organisational change helps cross theoretical boundaries in “the notoriously fragmented areas of policy, management, implementation, evaluation and even politics” (Baehler, 2007: p.173). Despite acknowledging that reform means change, the model of management reform proposed by Pollitt & Bouckaert (2011) did not include a factor of change management. This absence of effective change management over time, the last gap, challenges existing implementation frameworks. Currently, there is uncertainty about embedding the reforms of the PGPA Act, centring on the difference between an initial implementation framework and the actual results in APS practice (Barrett, 2017). This is despite those management reforms being implemented from the APS centre under a nation-wide legal mandate that should ensure permanent change.

However, there is developing recognition that implementing management reform needs to result in enduring and permanent changes. Current examples include the Australian Government (2018b), the Finance Department (2014), Ilott et al. (2016), Lindquist & Wanna (2015), O’Flynn (2015) and Marsh & McConnell (2010). Implementing reform is being re-framed as the management of change (Shannon, 2017), although this is yet to be recognised as a skill in the leadership of reform (Halligan, 2018). This study has contributed a difference in definition between ‘enduring’ (continuing for some extended time) and ‘embedded’ (made permanent). There is a need for cross-disciplinary theorising on implementing management reform that results in embedded public sector change. Re-conceptualising reform as change that must endure over extended time, this study re-assesses the frameworks of current implementation and management theory against reform that is demonstrably embedded. That re-assessment is subject to the following considerations about the limitations of this research.

41 From https://www.sydney.com.au/distance-between-australia-cities.htm (accessed 9/5/19).

7.21 Limitations of this Research

The assessments of the interviewee comments may have been biased by my lengthy APS employment and various professional contributions to APS reforms. Those contributions were to risk-management strategies in Excise revenue collections, to APS policies on contracting-out functions and staff competencies, to developing Health’s program evaluation guidelines, and to assessing the accountability of APS agencies in their Annual Reports for the Institute of Public Administration Australia. Those contributions provided me with a variety of insider/outsider perspectives, and I was challenged by discovering the difficulties in establishing that a management reform had become embedded. My understanding of current implementation theories and their relevance to Australian practices of governance may have been incomplete; nevertheless, it supports my assessment of the need for enhanced and longer time perspectives in such theories. The next section contrasts these findings from practice with their relevance to implementation theory and current re-assessments of implementation frameworks.

7.22 Insights into Current Theory

A major finding was the absence of comprehensive frameworks for embedding APS reforms. The initial implementation of those management reforms has been well-theorised (e.g. Christensen & Laegreid, 2006; Hupe & Saetren, 2015; Sabatier & Mazmanian, 1980; Saetren, 2005, 2014; Van Meter & Van Horn, 1975; Wanna (Ed.), 2007). Equally, APS reform practice has been of long-standing interest to Australian analysts (e.g. Dawkins, 1985; Holmes & Shand, 1995; Ives, 1994; Keating, 1989, 1990; Moore-Wilton, 1999; Moran, 2013; Sedgwick, 1994, 2011b) and to researchers (Davis & Wood, 1998; Halligan, 2005, 2007; Hood, 1991; Hughes, 1992; Lindquist & Wanna, 2011; O’Flynn, 2015; Rhodes et al., 2008; Yeatman, 1987). Whether implementation theory leads to embedded management reform in practice has not previously been evaluated. This study contributes to bridging that research-practice gap, with insights for theory from gaps in evaluating APS management reform.

APS management reform has only been evaluated once, in 1992. This absence of evaluation of long-term public sector reform has been of some research interest (Breidahl et al., 2017; O’Flynn, 2015; Pollitt, 1995; Pollitt & Bouckaert, 2011; Rhodes, 2016; Rhodes & Tiernan, 2015). No implementation framework has yet been derived from evaluating any Australian reform practice and drawing out the long-term meanings of success. There has also been an absence of research into why such evaluation of actual reform outcomes has not occurred. The more recent evaluative use of ‘embed’ by the APSC (2013a) was a repeated call for this to occur in APS practice, but did not include any methodology. This study has found that ‘embed’ has not featured in the outcomes of APS implementation practice. This results in challenges both to an APS Secretary’s ability to ‘make it so’ and to the re-framing of implementation theory.

There are current challenges to implementation frameworks, for both practice and research. Implementation theories such as Fernandez & Rainey (2006), Leithwood & Montgomery (1980), Sabatier (1986), Sabatier & Mazmanian (1980), Scheirer (1987) and Smith (1973) have had no discernible impact on APS implementation practice. Several reforms were started without their intended impacts, detailed implementation plans or measures of success being designed. Differentiated terminology has also become apparent: there are differences between a ‘reform’ and its implementation, which later becomes a disconnected ‘program’ for administration. In between are the differences between commencing a reform, managing its changes and achieving demonstrable outcomes. These insights point to a potential convergence between theories of public management, implementation, change management and evaluation.

This convergence has only recently been recognised, in two of those theories. An examination of the connections between change management and public sector reform concluded: “the dominant approach to public sector reform in the public management literature addresses primarily change made to organisations, whereas change management literature focuses on change made by or within organisations. Integrating the two viewpoints may deliver a strong conceptual basis for understanding the multi-level dimensions of change taking place in the public sector” (van der Voet et al., 2016b: p.94). This underpins the shift in reform paradigms identified in this study and the value of evaluating their overlaps, especially in the public policy cycle.

The last stage of the five-stage public policy cycle is evaluation. That cycle is: “agenda setting; policy formulation; decision-making; policy implementation; policy evaluation” (Howlett & Cashore, 2014: p.23). This research introduced a new, sixth element into this public policy framework: extended time before undertaking that evaluation. Evaluation’s program logic (Figure 5, chapter 3) links with the last stage in Kotter’s change management framework: establishing whether a reform has, over time, been institutionalised or anchored in a changed corporate culture. This study has also challenged the multiple current meanings of extended time associated with embedding.


These current meanings of embedding range from the specific to the vague. They include success through effective management systems (Newcomer & Caudle, 2011), the unspecific “need to embed reform momentum” (‘t Hart, 2011: p.209) and becoming “the dominant way of life” (Fryer & Ogden, 2014: p.1043). There were repeated calls for this embedding in APS practice (MAB, 1993; Tune, 2010; Sedgwick, 2011b, 2014), but no frameworks were proposed until the introduction of the PGPA Act in 2013. Unlike debates in practice (APSC, 2013b) and research (Newcomer & Caudle, 2011; Pal & Clark, 2015; Pollitt, 2013), the PGPA Act resulted in an embedded reform outcome (Barrett, 2014). This conclusion, by a former senior APS officer during the MfR reform and later Auditor-General, was absent from the interviewee responses made in 2016. Those responses identified only the need to demonstrate outcomes from the disruptions of management reforms, not the management of those outcomes in practice over extended time. These conclusions provide the following insights into current APS reform practice.

7.23 Insights into Current Practice

This study is relevant to two current APS priorities involving management change: the PGPA Act and the APS Review. The former has been subject to an evaluation after a short period of three years (Alexander & Thodey, 2018). The latter (Turnbull, 2018) has now resulted in an interim Report outlining further priorities for APS change (Thodey, 2019). Both are top-down announcements, and it is not clear that the APS has the current capacity to implement and manage reform change. As APS head, the Secretary of PM&C had (re)emphasised the importance of both policy and its implementation (Parkinson, 2016), although this was a single conference speech without any subsequent application to the APS. Clarification from the Office of the Secretary confirmed that it was a limited application to that agency’s responsibilities for Indigenous Affairs: “In practice, implementation in PM&C is limited to the Indigenous Affairs Group, which has responsibility spanning from policy development to on-the-ground service delivery across Australia”42. This is a restricted reference within one agency, not a resulting management priority for the APS. This study has identified concerns about the effective influence of the APS centre on the devolved and geographically-dispersed APS.

42 Personal email to the researcher from the Deputy Executive Officer to the Secretary of PM&C (8/11/16).

In the APS currently, there are influences on staff that are both supportive and contradictory. These relate both to their geographic location and to their long-term career prospects. The immediate past Australian Public Service Commissioner, John Lloyd, acknowledged the importance of APS regional staff, but also indicated that a career APS of permanent officers may not be the future: “So many of our staff are leaders. It is not the CEO and top cabal of an agency. It extends well beyond that. It is often the officer working in a regional or remote office… I expect that there will be fewer career public servants in the future” (Lloyd, 2017b). In the then Commissioner’s statement there is a temporal contradiction: staff are important as leaders (and co-implementers), yet those same staff may not have a long-term career. This risks the APS generating further corporate amnesia and losing the ability to demonstrate long-term, on-going non-financial performance by an agency and its staff. It is a contradiction between short-term responsiveness to the government and long-term, demonstrable achievements from implementing management reforms.

This contradiction is relevant to the factor of extended time identified in this research. The turn-over of senior figures undertaking top-down implementation, coupled with the long-term perspective required for embedding effective reform, means that the means of achieving effective reform throughout the APS cannot be identified. This research did identify the importance of a management information system to address corporate amnesia and to build its opposite: the corporate memory needed to demonstrate long-term, effective agency performance. The review of the PGPA Act emphasised the priority of improved reporting of non-financial performance (Alexander & Thodey, 2018). These conclusions contribute to re-framing reform implementation as change management. This responds to the concerns of O’Flynn (2015) about developing a stronger model of public sector reform and to the re-focussing of management reform as the management of persistent change, which needs to involve affected middle managers (Buick et al., 2018; Shannon, 2017). The above conclusions frame the following summary responses to the four research questions.

7.24 Responding to the Four Research Questions

The core question was whether the 1980s management reform of program evaluation became embedded in the APS. Closer analysis in this thesis revealed a contradiction in time: the practice-based assessment in 1992 (TFMI, 1992) that it had, followed by its subsequent decay after 1996. This decay reflected a change in Government, accompanied by changes in the Secretaries of the Prime Minister’s and Finance Departments, to which the APS responded by implementing the next reform priority (accrual accounting). This decay reflected two factors raised in this research. One is the reform series, which has been identified (e.g. Pollitt, 2013; Wettenhall, 2013) but not further researched as to the systemic reasons for its existence. The second is the key factor underlying this research: the need for extended time in evaluating the long-term outcomes of public sector management reform. This core research question was divided into the following three further sub-questions.

The first asked: what is the role of public sector change agents in embedding APS management reform? This research has differentiated between change initiators and later-stage change consolidators. The series of similar priorities in APS reform (Figure 20) suggests that many management reforms were started, but were disrupted and did not become embedded in APS practice and corporate memory. The management reform of program evaluation was taken up by specialised staff, who were lower-level consolidators of that reform in practice. When formal evaluation was no longer required, these staff and their skills were no longer utilised, suggesting that this management reform could have endured had those staff been retained and their reports integrated into standard management practice.

The second sub-question asked: how can change management frameworks explain the challenges of implementing APS reform policies? The short answer is: only partially. The last stage of the change management framework used in this thesis is institutionalisation (Kotter, 1995). That final stage in Kotter’s framework was recommended in some APS practice (e.g. MAB, 1993; Metcalfe, 2007), but is notable for its absence in the separate field of public management research (Osborne, 2017). A new challenge is to differentiate between short-term and long-term change management. Current frameworks could be expanded to maintain the momentum of reform change over extended time, through a new agent: the change consolidator. This highlights the new insights gained in this inter-disciplinary research.

The third asked: what insights might be learnt from applying a lens of extended time to implementation theory and to examining how reforms endure? This thesis found that the long-term implementation of effective reform was vulnerable to the relatively limited tenure of the Secretaries who commenced that management reform. It requires decades to change management practices and make them stick throughout a large and geographically-dispersed organisation like the APS. This theme of ‘stick’ is a developing research interest (Hood & Dixon, 2015; Ilott et al., 2016; Lindquist & Wanna, 2011). This thesis identified a gap in terminology between ‘implemented’ and ‘embedded’ in defining the outcomes of management reform over extended time. Implementation theory lacks a long-term factor of formal evaluation of reforms’ effectiveness in achieving their original objectives. That effectiveness of past reform outcomes needs to be retained in corporate memory and be available to future APS staff, avoiding the repetition of the reform series. These findings suggest further research into demonstrably linking the initiation of management reform with the effectiveness of its eventual outcomes over the long term. The following insights into such cross-disciplinary linkages consolidate these findings.

7.25 Insights into Cross-Disciplinary Linkages

There is a need for an implementation framework involving extended time to achieve ‘embedding’. This link between reform intent and later outcome is regarded as a “slippery concept” (Pollitt & Bouckaert, 2011: p.126). Nevertheless, it has become an emerging research interest (Hood & Dixon, 2015; Pal & Clark, 2015; Pollitt, 2013; Pollitt & Dan, 2013). This study remains relevant to APS practice, where “implementation remains a critical issue for governments of all persuasions and a central concern of the governance narrative” (Rhodes & Tiernan, 2015b: p.91). As an exemplar of the extended time identified in this study, the implementation of the MfR reform of program evaluation proved transitory and was evaluated over too short a time span: some ten years afterwards (TFMI, 1992). That requirement for program evaluation did not survive the change of national government in 1996 and the consequent change in the management priorities of agency heads.

That 1992 evaluation concluded that the APS had developed an evaluation culture. This conclusion was, however, made by a senior group (the Secretaries of the Management Advisory Board) about their own implementation practices within their agencies, based on the requirement for Portfolio Evaluation Plans from all agencies. Seventeen years apart, the same individual reached two different conclusions about that evaluation culture: that it was embedded (Sedgwick, 1994) and that it was not (Sedgwick, 2011a). This was a personal demonstration of the perspective of extended time in reflecting on reform implementation.

Implementation theory can link with extended time for managing change and evaluation. Evaluation is the long-standing means of demonstrating the effectiveness of a reform: “Program evaluation and performance monitoring are indispensable components of any effective program of public sector reform” (O’Faircheallaigh & Ryan, 1992: p.xi). This links initiating the implementation of reform with its later evaluation, to establish the effectiveness of those reform changes. Kotter’s theory of change (1995) has not been generally applied in the public sector, and his eighth stage of change management (‘institutionalise’) offers no clues as to how change agents could achieve this in dispersed and devolved public sector entities.

By contrast, this research identified reform being vulnerable to top-down change agents commencing implementation but not even achieving Kotter’s stage seven: consolidating improvements and producing further change. This distinguishes between change initiators and consolidators in bringing about later systemic and embedded management reform in the long term. Existing implementation frameworks were contrasted with the practices of implementing management reform and the experiences of APS staff at management and operational levels. By adapting the evaluator’s long-term perspective on effectiveness to ask what success would look like, a factor of extended time was identified. Claims of long-term success in the case study of evaluation and ‘Managing for Results’ could not be sustained. This highlights a gap in implementation theories and contrasts with the last stage of change management theory: institutionalisation or embedding. These insights are the bases for the following conclusions.

7.26 Conclusions

This study challenges assumptions that APS Secretaries can currently implement and achieve management reform. Despite Secretaries being the managers of APS agencies, responsible for achieving their agencies’ objectives through management and staff, this thesis found that their span of actual impact does not necessarily lead to sustainable management reform. The former Secretaries interviewed had not embedded the reforms for which they were responsible and were unable to identify any systemic means of making this happen. They were aware that their successors could and did change these reforms, creating and perpetuating the disruptive APS reform series. Linked factors in that series were changes at the top (in Governments, Ministers or Secretaries), the failure to evaluate the outcomes of previous reforms when justifying further reform, and the failure of changes from past reforms to become demonstrably embedded for the long term in APS practice.

This study of APS practice provided a contrast with existing implementation frameworks. It drew upon the reform experiences of APS staff at both management and operational levels, providing the perspectives of reform initiators and the subsequent managers. A factor of extended time was identified in this contrast, by adapting the evaluator’s long-term perspective on effectiveness and asking what management reform success would look like. Claims of long-term success in the case study of program evaluation could not be sustained. This highlights a gap in implementation theories and contrasts with the last stage of change management theory: institutionalisation or embedding. This thesis makes four key contributions to new knowledge from practice. Implementation was by the top-down initiation of reform by Government. There was a lack of long-term planning in the initial commencement of reforms. The continuation of a reform was vulnerable to changes in key people or governments. While geography can be a taken-for-granted implementation factor, it was found to have been overlooked in the centre’s planning of how long a reform needed to be conducted to change practice in the geographic regions. These contributions assist in explaining why the MfR reform of program evaluation was not embedded.

To examine such embedding in practice, this research drew upon four theories: implementation (of public sector management reform), change management, the factor of extended time adapted from organisation theory, and the evaluation of embedded reform impact. Many of the interviewees, who were past and present APS members, drew negative conclusions about reform impact, concerning the ever-turning APS reform series. That series was shown in this study to exist in practice, but it has only been acknowledged in the literature, with no solutions offered. It was found to have a negative impact on maintaining the agency-wide momentum of change needed to achieve long-term outcomes from initiating reform. By integrating, for the first time, implementation with the assessment of change management effectiveness, linked with extended time and geography, the following eight new and linked factors could enhance existing implementation theory.

As new knowledge contributing to current implementation theory, these eight factors were derived from the APS practice examined in this research and are key to understanding the embedding of public sector management reform. They are: (1) short-term, continuous APS reform series disrupt the long-term implementation of a single reform; (2) implementation is also affected by a lack of continuity in the APS head; (3) a management perspective of extended time is required, both initially and subsequently; (4) all staff, especially middle managers, need to be involved to complete implementation; (5) evaluability needs to be designed into management reform; (6) information systems need to report to management on reform outcomes, providing corporate memory of both past and present reforms; (7) any intended implementation impact of the centre needs to be realised across the geographically-dispersed APS; and (8) the momentum of changes from long-term reform needs to be maintained so that they become embedded. The relationships between these eight factors are demonstrated in the following jigsaw.


This jigsaw of Figure 23 has been constructed to highlight the cross-disciplinary factors identified in this research as being relevant to embedding public sector reform: implementation, change management, extended time and evaluation.

Figure 23: The Jigsaw of Effective Implementation

[Figure 23 depicts the public sector reform cycle as a jigsaw of challenges to implementation theory: reform commenced by change initiators; objective design (logic for longer-term outcomes); implementation of changes; short-term change management, in which change managers achieve impact on current staff; and reform assessment, in which effective reform changes are institutionalised over the long term and become embedded.]

Although fragmented across the four currently distinct fields of implementation, change management, organisation research and evaluation, these new factors were identified as collectively contributing to achieving embedded public sector management reform. Studying them offers potential paths forward. On the left side of the jigsaw, three factors are drawn in broken lines, being new contributions from this research. In total, the elements of the jigsaw do not currently fit together, as no single research framework encompasses them. They have been drawn this way to indicate possible future research into linking them.

The study examined only the management reform of program evaluation, since it demonstrated a fixed period of some ten years in its use by the APS and functioned as a meta-component of implementation and reform. The literature of implementation lacks a component that evaluates the success of a management reform’s actual outcomes in changing practice. Pettigrew’s use of ‘longitudinal’ highlights the place of extended time that could complement implementation theory. This thesis concludes with chapter 8, which outlines the findings from the disciplines of implementation, change management and evaluation and summarises their significance. That final chapter also considers the main implications of this research for both theory and public sector practice.


Chapter 8 Embedding Public Sector Management Reform

8.1 Introduction

Whether program evaluation, as an exemplar management reform, was embedded in APS practice was the main focus of this thesis. A secondary focus was the ability of an individual APS Secretary to embed such management reform. Such program evaluation was a planned outcome of the 1980s reform titled “Managing for Results”, or MfR (TFMI, 1992). This research also emerged from the absence of “robust conceptual models to understand public sector reform” (O’Flynn, 2015: p.21). In re-assessing this quandary, Shannon (2017) noted that such reforms involve practice-based change and suggested the analytical lens might be more productively re-focussed from ‘public sector reform’ to ‘change management’. By examining the influences on whole-of-agency APS change management and whether they were effective in achieving permanent management reform, this thesis contributed to that re-focussing. This was achieved by adopting an enriched cross-disciplinary approach from three fields: change management, organisation research and evaluation theory. Those three fields contributed the following factors to enhancing current implementation theory: (1) change management: change consolidators, change institutionalisation and embeddedness; (2) organisation research: extended time and maintaining reform momentum; (3) evaluation theory: designing reform to be evaluable, capable of being assessed later as to its effectiveness and embeddedness. Key theoretical findings, as outlined in chapter 7, included the lack of a comprehensive framework for initially implementing and then managing the changes of those management reforms, over sufficient extended time, to demonstrate permanent reform and achieve the last stage of embedding. This chapter draws together the thesis findings and discussion, to highlight the five areas where contributions are made to public sector implementation theory and practice.

A case study of the APS was utilised to examine the implementation of public sector management reform by Secretaries as the heads of APS agencies. The research emerged from increasing evidence of poor implementation of management reform. When examined over an extended time of decades, the thesis challenged assumptions that management reform changes can be implemented by the Secretary’s top-down direction (Barrett, 2017; Halligan, 2018; Podger, 2007; Rhodes, 2016). As outlined in chapter 2, these assumptions were created by the 1980s ‘Managing for Results’ reform that devolved APS management responsibilities from the centre to individual Departmental Secretaries. This thesis found differences, in both theory and APS practice, between initiating top-down management reform and demonstrating any later, embedded impact. This chapter presents new understandings of factors in implementation theory through five original contributions: reforming the single APS through its Secretaries; the geographic dispersion of the APS as a factor in change; change consolidation undertaken over extended time; evaluating the effectiveness of management reform; and embedding the reform changes in permanent practice. The first contribution is the gap between the reform intentions of the APS centre and its constituent departments as managed by their individual Secretaries.

8.2 Reforming the Single APS through its Secretaries

The first contribution was identifying the gaps in effective implementation between the APS centre and the eighteen Departments. Top-down change initiated from the APS centre (such as the Finance Department in Canberra) was found not to lead to long-term and uniform management reform across the large, dispersed APS, because the reform change was implemented through the discretion of individual Secretaries. Although the APS is regarded as a single entity under section 3 of the Public Service Act 1999, in practice individual Secretaries drive their own reforms and manage their individual agency responsibilities around Australia. This devolution of management responsibilities to those Secretaries was intended in the earlier MfR reform (Keating, 1995), but managing the APS as a single entity has been a recurring objective in APS policy. Recent examples have included “MAC is committed to a single SES across a single, devolved APS” (MAC, 2005: p.1) and the current Review of the APS as to whether it is fit for purpose, where “[W]e need a trusted APS, united in serving all Australians” (Thodey, 2019: p.23). This research supports the recent observation that “devolution had gone too far, fragmenting government, that a more whole-of-government approach was needed” (Podger, 2018: p.120). These findings reveal an absence of effective central co-ordination of reform intended to embed management changes in the single APS managed by those Secretaries.

Those Secretaries’ attention, priorities and maintenance of the momentum of an individual reform were diverted by the repetitions of the short-term APS reform series. This series was also found in this thesis to be affected by changes in the occupant of the position of Secretary of the Prime Minister’s Department (as APS Head), which consequently changed the priorities of Secretaries. A tension was identified between Secretaries’ responsiveness to those changes in the APS head and their individual responsibilities to manage their Departments. By contrast, this research found that the influence on reform implementation of those top-down change agents (individual APS Secretaries) cannot always be sustained. Some of the APS Secretaries within the time span of this study (1984-2014) were interested in continuing reform, while others were not. The reasons of the latter could not be established, as they derived from confidential observations by some of their peers who were interviewed. This identifies a new factor in the impacts of reform implementation: top-level changes in the individual occupants of key positions, which contributes a practice-based factor to implementation theory.

A key finding of this thesis was the theory-practice gap between implementing management reform as an ideal in the literature and current public sector practice. Former Secretaries agreed that the permanent outcome of embedding was the ideal objective of implementing management reform, but could not identify any model in practice that would achieve this throughout either the single APS or a department. This matters because of the responsibilities of the APS Head and departmental Secretaries to “provide leadership, strategic direction and a focus on results for the Department”43. They are responsible both for initiating management reform and for managing their agencies to be efficient and effective, but the continuation of reform was found to be vulnerable to the on-going presence or absence of those Secretaries. A further gap in central reform frameworks, found in interviews, was the geographic dispersion of most APS staff around Australia. This dispersion affects the APS centre’s influence on those Secretaries and their geographically-dispersed staff, the recipients of the changes from management reform. This identifies two new implementation factors: geography and management information about reform outcomes.

8.3 Geographic Dispersion and Performance Information

New practice-based implementation factors identified were geography and performance information. Whether it is possible to reform the single, geographically-dispersed APS, and to establish that this has occurred, were examined in this study. As set out in chapter 2, this thesis has distinguished between the APS centre, its eighteen constituent Departments and the geographic dispersion of most of their staff outside the centre of Canberra. The need for past management reforms of the single APS has been taken for granted in their initiation and implementation, without regard for that geographic dispersal or the recording of uniform information on reform impact amongst regional staff. This had earlier been an incomplete factor in implementing the ‘Managing for Results’ reform forming this case study. The thesis revealed that implementing effective and embedded management reform cannot be taken for granted in a devolved APS, where sixty-two per cent of its staff are located outside the influence of senior APS managers in the capital city of Canberra. Those central office managers of implementation need ongoing information about reform progress and results in the regions, especially as section 57(2) of the Public Service Act 1999 mandates that one of the Secretary’s four responsibilities is to: (d) “provide leadership, strategic direction and a focus on results for the Department”. No ongoing information systems were identified in this study. The links between reform performance, management information systems and reforming the single APS through its Secretaries follow.

43 Under Section 57(2)(d) of the Public Service Act, 1999: https://www.legislation.gov.au/Details/C2019C00057

Information about the effectiveness of management reform over the long-term was lacking. During implementation, information about outcomes and the progress of reform needs to flow to Secretaries as the central decision makers. Long spans of management influence and information flows need to be maintained between Canberra and (say) Darwin (3,900 km) or Perth (3,700 km)44. This is of current relevance, as APS agencies have difficulties in developing and implementing the basics of uniform national performance: Key Performance Indicators (ANAO, 2013b). Performance information about reform results also provides feedback on APS accountability, to Parliamentary committees such as the Joint Committee of Public Accounts and Audit and to the ANAO. These accountability centres are complementary factors supporting the insight of this thesis: the importance of information about progress on performance and the eventual outcomes from implementing reform. These are relevant to this study’s theme of embedding reform, which was also found to require extended time to have impact and be considered effective.

8.4 Extended Time to Embed Effective Reform

This contribution links the desired reform outcome of permanent change with the extended time required to achieve it. As the last stage of leading change (Kotter, 1995), institutionalisation (or embedding as permanent practice) was found to be absent from APS practice in all agencies. This absence matters because embedding public sector change can require several decades (Hood & Dixon, 2015; Pollitt, 2013), so that extended time becomes a factor in planning management reform which effectively achieves its objectives. Such time is the temporal link between commencing a management reform and its systemic continuation throughout the APS as a single entity, especially for its uniform effects within each of the eighteen departments of state. This study also found three critical differences of meaning in APS reform practice: between (1) ‘implementation’, (2) ‘implemented’ and (3) ‘embedded’. These differences were time-related: between beginning a reform, rating it completed but only in the short-term, and whether the reform changes became embedded over the longer-term of decades. Effective management of reform is thus linked with maintaining its momentum over extended time.

44 Both distances sourced from https://www.distancecalculator.net/ (accessed 13/6/19)

Over that longer-term, this research found there was incomplete implementation of APS-wide reforms. The changes of the MfR reform were assessed after ten years, but whether those changes lasted long enough to become embedded was not evaluated by the initial reform agents, the APS Management Advisory Board. Even after those ten years, APS staff in the regions (where most are located) were not fully convinced of that reform and “future reform effort could more successfully involve regionally-based program delivery staff” (TFMI, 1992: p.489). Given such geographic dispersion of staff, extended time is required to maintain the momentum of reform and embed its changes in staff practice. This was exemplified in the following two APS reforms being disrupted by changes at the top. Forming part of this case study, the MfR reform of program evaluation was considered implemented by 1992 (TFMI, 1992) but was no longer required only five years later, by 1997, after a change in Government (Mackay, 2011). Commissioned by Prime Minister Rudd in 2009, the general APS reform ‘Ahead of the Game’ was published in March 2010 (AGRAGA, 2010), but came to be “subject to the policy priorities of the Gillard government” (Sedgwick, 2011b: p.75). Ms Gillard had become Prime Minister in June 2010 and the ‘Ahead of the Game’ “reforms either fizzled out within 24 months or fell foul to a change in government” [in September 2013] (IPAA, 2018: p.23). Implementing reform was disrupted by top-down changes in priorities over the short-term.

These changes in top-down priorities resulted in ‘implementation’ being focussed on the initiation of management reform. Later conclusions of ‘implemented’ after ten years were made too early and were found not to be sustainable over an extended period of several decades. These differences in meaning over time are relevant to embedding effective reform, in both theory and APS practice. Embedding has been theorised as an ideal outcome of public sector management reform (Lindquist & Wanna, 2011; Pal & Clark, 2015), but has been neither operationalised in practice nor further researched. With the new finding of extended time, this study makes a further contribution which enhances the implementation framework and the outcome of embedded reform, through the new role of change consolidator outlined next.

8.5 Maintaining Reform Momentum through Change Consolidators

This thesis found the potential long-term consolidators and embedders of that MfR change were absent. This connects with the implementation factor of reform momentum being maintained over extended time, as losing reform momentum is a danger in implementing long-term and effective reform (Morrison, 2014; Wilenski, 1986). In that single APS, this study found that tensions in implementing effective reform included maintaining the momentum of the reform change. This thesis found that such reform momentum was disrupted by three influential changes at the top: in government (by election); in the APS head (the Secretary of the Prime Minister’s Department); and in individual APS Secretaries. Because of one or more of those three changes, the influence of the initial change agents was found not necessarily to last for the long-term. When those initial leaders of management change moved on, interest by APS managers in maintaining the reform was found to wane. This left a management reform vulnerable to not continuing or not becoming embedded, through those changes in key personnel and their consequent absence.

This study found there was some awareness by their successors of those top-level absences and reform changes. As a change agent of long-term reform, the ability of an incumbent Secretary unilaterally to ‘make it so’ could not be demonstrated in this study. There was only limited understanding of the role of middle managers as the later-stage consolidators of change, and this thesis identified three different actors in implementing reform over time. These actors are associated with the corresponding stages of reform: first, the senior reform initiators; second, the later middle managers of change; third, the eventual consolidators of change that is effective in achieving the original reform objectives. The initial senior initiators of reform did not necessarily achieve long-term change, which needs further management within an agency for the change to become demonstrably embedded in permanent practice.

As a result, what was missing was a system for maintaining reform momentum in the absence of those top-down reform initiators: the APS Secretaries. This thesis found that embedding long-term reform changes was vulnerable to changes in those first-stage, initiating personnel. These changes result from their retirement, promotion or transfer, as exemplified in the turnover of the APS head (Secretary of the Prime Minister’s Department) shown in figure 18 of chapter 7. Since 1986, turnover in this position has increased and the periods of occupancy have reduced, from over five years to three. This may influence the corresponding reform priorities of the APS Secretaries, as their short-term responses to that APS head. When such senior change initiators left, this thesis found, a gap developed in the last stage of anchoring a change. This waning of reform momentum led to identifying a different role for later-stage and lower-level change consolidators.

These change consolidators would maintain the momentum of reform through to its eventual impact and embedding afterwards. Change consolidation can contribute to a framework for managing change comprehensively throughout an APS agency, a framework otherwise found to be absent in this research. The typical structure of an APS agency (figure 2, chapter 2) shows descending levels of senior management authority (the Senior Executive Service). This means that middle managers are also important consolidators of implementation (Pollitt, 2015) and change (Buick et al., 2018; Van der Voet et al., 2016a). Such change in the APS is driven via those middle managers at the Director (EL 2) level in head office and regional offices, who were eight per cent of the APS in December 2018 (APSC, 2018b). This study found differences between who initiates management reform and who achieves embedded impact. Some of these differences relate to managing the intervening changes of that reform.

There is no model of the effective implementation of a public sector management reform such that it lasts over extended time and results in being embedded in an agency. This study contributes to emerging research re-focussing public sector reform from its implementation to the subsequent management of its changes (Shannon, 2017). By changing the research lens from ‘public sector reform’ to ‘change management’, this thesis found differences in implementation responsibilities over time relating to positions in the APS. Below the senior initiators of reform change (APS Secretaries) are the subsequent lower levels of middle managers and staff, who are required to implement and consolidate the changes of that reform. Maintaining implementation momentum requires both the initiator of reform and the active involvement of agency staff (Barker et al., 2018), as the middle managers of those staff are now acknowledged to be significant agents of change (Buick et al., 2018). This thesis adds to a recent observation that “how reform travels within systems is still mostly a mystery” (O’Flynn, 2015: p.20). This contribution highlights the incomplete role of Secretaries in starting management reform at the top, compared with the later roles of both maintaining the momentum of change and the consolidation needed to achieve effective reform.

To achieve such effectiveness, this study found a major distinction in implementation practice over time: between the short-term (of commencement) and the longer-term (of embedding). This thesis finds that implementation theory can be enhanced by adding the role of change consolidators leading to the later stage of embedding, which may require extended times of decades to result in permanent changes of practice in each agency of the large and decentralised APS. By contrast, ‘consolidation’ is currently employed in a different sense relating to the unified financial statements of decentralised public agencies (Bergmann et al., 2016; Hyndman & Liguori, 2016). Its use here as a public sector position in implementing management reform identifies new fields of research in both implementation and change management theories, especially their potential linking through the new factor of extended time. The absence of a model of implementation over extended time challenges whether management reform can be effective throughout that single APS, as established through formal evaluation of the later outcomes of reform. The findings of absent links between implementation and effectiveness are outlined next, by drawing on the field of evaluation.

8.6 Management Reform not Evaluated

The effectiveness of management reform was not established by evaluation. This study introduced the factor of ‘embedded’ (made permanent in practice) into examining the initial frameworks and eventual outcomes of implementing reform. This thesis found repeated requirements for effective agency performance in the APS reform series spanning some twenty-five years. That reform series began with the ‘Managing for Results’ reform of 1986, ended in 1996 and returned with demonstrating non-financial APS performance under the Public Governance, Performance and Accountability Act 2013. This series highlights the theme of this thesis of ‘embedding’, meaning a once-and-for-all change. Planned evaluation was absent as a factor in initially designing APS management reform intended to achieve embedded impact. Achieving objectives can be demonstrated through evaluation as a key tool for management. It contributes to “the ability of program managers to make informed, evidence-based decisions in close to real time” (Downing & Rogan, 2016: p.1) and helps to “discover how well programs and policies work, under what circumstances, and with what results” (Newcomer & Brass, 2016: p.94). Despite interviewees acknowledging the value of embedding, no frameworks were identified from either practice or theory.

This is important, as it suggests the initiation of a management reform has been a higher priority than ensuring it achieved its objectives of change. This is significant for designing effective APS management reform, because of the repeated reform series and the lack of lessons learnt from past practice. These demonstrate a supplementary insight of this study: a lack of learning from the past about implementation best practice, leading to policy and administrative amnesia. Such amnesia has two components. One is the repeated introduction of similar management reforms, each of which has been considered unique by its successive initiators. The second is the introduction of similar reforms under different names but with similar performance-related objectives: MfR in the 1980s; ‘Ahead of the Game’ in 2010 (figure 20, chapter 7). The latter was associated with short-term occupancies among Secretaries and governments, which have been recent features of both the political and administrative sectors in Australia45. This highlights the new factor of later-stage change consolidators in embedding effective reform.

The lack of evaluation of long-term management reform results in a lack of evidence about past practice and about what works in changing APS practice by embedding that reform. This thesis began with one meaning of public sector reform as “deliberate changes to the structures and processes of public sector organisations with the objective of getting them (in some sense) to run better” (Pollitt & Bouckaert, 2011: p.2). That definition means reform involves change to organisations that is eventually rated to be better, but it omits time and change agents. By examining the influences on whole-of-agency APS change management and whether they were effective in achieving permanent management reform, this thesis contributed five factors to implementation theory.

8.7 Contributions of this Study to Theory

This thesis makes original contributions to implementation theory, by drawing upon insights from three other fields: change management, organisation research and evaluation. By utilising those three fields in this research, it was possible to examine what was missing from a model for implementing effective management reform that is sustainable. Those cross-disciplinary fields contributed the following insights: a new function of change consolidator, maintaining reform implementation momentum; extended time, potentially of decades; and the capacity for the reform to be evaluated for its later effectiveness and embeddedness. Effective management of reform change was found to require more than the initial change agents currently initiating management reform: APS Secretaries. This study contributes new evidence to the concern that “research on the evaluation of large-scale public-sector reforms is rare” (Breidahl et al., 2017: p.1). This was achieved by applying an evaluation framework to current implementation theory, resulting in new implementation factors of change management that is consolidated over extended time and is demonstrably effective in embedding reform outcomes.

45 On personnel change in these sectors, Kamener (2017) noted “in the four years since the coalition government came to power in Australia [in 2013], there have been 42 changes of minister and 19 changes of secretary”.

The first contribution identified a later implementation stage of consolidating reform changes. By adapting ‘extended time’ from organisation research, maintaining the momentum of reform change and its consolidation becomes an adjunct to implementation theory. Compared with the reform/change initiator, the different function necessary to consolidate agency-wide reform over an extended time of decades has not been examined in past research. Although change management is a senior leaders’ skill (APSC, 2014a; 2014c), this potential new role of change consolidator represents a field of future research into its relationship to achieving embedded reform.

Embedding reform can require decades of change. The second contribution, of extended time, was adapted from two fields: longitudinal research (Pettigrew, 1990) and evaluation’s program logic, which emphasises short, medium and long-term outcomes. Current implementation frameworks lack the factor of extended time identified by Pettigrew. This factor is important in maintaining reform momentum throughout large and dispersed organisations such as the APS, especially across the three levels of short-term change initiators, medium-term change managers and longer-term change consolidators. Extended time is also necessary to de-couple senior management attention from the short-term priorities of government, changes of Ministers and associated changes of Secretary. This identifies the significance of separate change agents in consolidating reform change, maintaining reform momentum across both the extended time and the geographic dispersion of the APS. This represents the last stage of institutionalising change and reform outcomes.

Implementation theory has not included evaluating the outcomes of reforms, despite recognising the on-going turns of the reform series exemplified in figure 5 of chapter 2. Past analysis of this reform series has only identified its existence, not any underlying factors in its continued repetition. Although “theory has not given much in terms of understanding these dynamic processes or whether, in setting reform agendas in train, we ever achieve our goals” (O’Flynn, 2015: p.19), no Australian researcher has evaluated his or her writings on the initiation of reform against the later outcomes of those reforms. The factor of extended time connects implementation and the last factor, evaluation, through assessing reform results potentially over decades. By drawing upon theories of both evaluation and organisational change over extended time, this finding contributes to enhancing implementation theory. Implementation theory has paid less attention to extended time, compared with its critical place in the initial design feature of evaluation’s program logic (figure 5, chapter 3). Using program logic to design reform creates a pathway for comparing initial management reform intentions with their eventual outcomes. For this comparison to be undertaken, this thesis found that an elapsed time of ten years was necessary but not sufficient to achieve embedding. A gap was found in this reform momentum, from the later absences of the initial change agents: APS Secretaries.

These were factors disrupting the implementation of effective management reform. Those time-related factors were the short-term occupancies of both the APS head and other agency Secretaries, affecting the maintenance of management attention and corresponding reform momentum. This has contributed to the increasing examination of why public sector reform does not stick (e.g. Ilott et al., 2016), by evaluating why the 1980s ‘managing for results’ reform of program evaluation only lasted some fourteen years, between 1982 and 1996. Earlier claims of embedding that reform were later disrupted and could not be substantiated by reference to existing theory, highlighting some limitations in current research frameworks.

Limitations were found in the factors of implementation relating to change, extended time and effectiveness. The analytic framework of this study drew together the separate domains of implementation of public sector management reform, organisation research, change management and evaluation theory. Each contributed to answering the question of whether change management theory could guide enduring public sector reform; the answer was: partially. Current research on change management in the public sector emphasises the significance of top-down leaders who are expected to change their agencies. It does not address the personal impacts of such change leaders, especially their later absence and whether the outcomes are successful (Kuipers et al., 2014). This factor of success is relevant to current practice in the APS and internationally (Moynihan & Beazeley, 2016) and demonstrates the lack of both evaluation and the factor of embedding examined in this research.

This study contributes to research evaluating the long-term outcomes of public sector management reform. By drawing upon combinations of change management, extended time and evaluation, this thesis shifted the focus from initial implementation to establishing later reform success. Implementation theory is under review, to understand whether the achievements of public sector management reform stick. Implementing management reform presently lacks connected pathways with change theory and evaluation to aid this understanding. By drawing upon APS reform practice, this study has identified issues of insufficient implementation in the longer-term, with limited information being gathered regarding long-term performance and effectiveness.

A key finding of this thesis was the gap between embedding management reform as an ideal and current public sector implementation practice. Former Secretaries agreed that this permanent outcome of embedding was the ideal objective of implementing management reform, but could not identify any framework that would achieve this throughout either the single APS or a department. The next conceptual steps have been partially developed by re-considering reform as processes of change (Shannon, 2017), although this is yet to include whether the outcomes of that change result in effective reform. This long-term attention to implementation features in the extended timeframes of evaluation theory, as incorporated in the short, medium and long-term outcomes of program logic. The long-term implementation of a reform was found to be challenged by the short-term priorities of the government of the day, especially when reflected in changes of Ministers or Secretaries in the eighteen Departments of the APS. Examples of the short-term occupancies of those senior positions over six years occurred in recent Australian practice, where “under the previous Rudd/Gillard government [between 2007-2013] there were 36 changes of minister and 22 changes of secretary” (Kamener, 2017). This negative example further illustrates the relevance of the long-term influence of an individual Secretary in making a reform stick, so it becomes embedded in permanent practice. These findings suggest that further research is needed into re-framing management reform implementation theory, from initiation to becoming demonstrably embedded in the long term.

8.8 Contributions of this Study to Practice

This study examined the assumption that management reform would be embedded. The contributions of these research findings to theory have been identified and were extensively summarised in the preceding section 8.7. They contribute to the contemporaneous Australian concern that “we do not have very robust conceptual models to understand public sector reform” (O’Flynn, 2015: p.21). Those findings from practice contribute to the conclusion that no single implementation model currently contains those planned and verifiable links between top and bottom, especially any relating to either the geographic dispersion or the extended time required to evaluate the outcomes of effective management reform that becomes permanently embedded in APS practice. The main assumption was that management reform initiated from the top by APS Secretaries would result in uniform and permanent impact throughout all agencies. This thesis found those reform initiators (Secretaries) do not necessarily achieve embedded change, as this requires an enhanced mixture of reform design, planned implementation, change management with reform momentum maintained for the longer-term, and an evaluative mind-set.

By utilising an evaluation framework to assess implementation theory, this study introduced a new factor of program logic into the design of implementation practice. That framework of program logic contains three time-related factors for evaluating outcomes: short, medium and long-term. The assessments of the ‘Managing for Results’ reform demonstrate the significance of extended time: for example, compare the early positive assessment of Sedgwick (1994) with his later negative conclusion about its absence (Sedgwick, 2011). The extended time needed for implementing effective and embedded reform, and the intervening element of geography, are the missing links that this study identified in current implementation practice.

This thesis makes four new contributions to APS practice. These are that change is necessary over extended time and that momentum must be maintained by the APS centre across the single APS, in order for reform to be demonstrably effective. Implementing management reform in the APS has not taken place over a sufficiently long timeframe, nor revealed an appropriate implementation model (by evaluation) that would determine whether those reforms achieved their desired intent. This finding can be placed in the context that the APS lacks skills in outcome-focussed strategies and managing performance (APSC, 2013f). The APSC’s examination was a lengthy evaluation of the APS over five years, of its fitness for purpose in the years 2011-2016 (APSC, 2016). Those conclusions by the APSC remain relevant to the current review of the APS as to whether it is fit for purpose (Thodey, 2019; Turnbull, 2018). The following contributions add to the new direction suggested by Shannon (2017), re-focussing implementation practice from ‘public sector reform’ to ‘change management’. They challenge existing practice-based assumptions about implementing reform.

The first assumption was definitional: the meanings and priorities of reforms from the centre were assumed to be understood and accepted by staff in the regions. The second assumption was implementation effectiveness: that those reforms would be implemented uniformly throughout all APS departments and their regional offices, in the manner intended by the centre. Four gaps in practice were found: between the centre and departments; between Secretaries and APS staff; between initial change agents and later-stage change consolidators; and between the short-term initiation of change and the extended time for its consolidation. Together they contribute more extensive factors to the practice of effective and embedded APS reform.

These conclusions are based on reforming that single APS. This thesis has identified four challenges to implementing reform uniformly, with identifiable impacts, across the single but dispersed APS. These are: senior management’s span of influence over the geographically-dispersed APS; maintaining and consolidating reform momentum over extended time; feedback systems providing management with information on actual reform results; and assessing the effectiveness of those results. By this later-stage examination of the actual results, as originally designed into the reform’s program logic, the study confirmed the importance of evaluation in implementation practice.

Evaluation has now been identified as the means to demonstrate non-financial performance under the PGPA Act, both in research (Barrett, 2018; Gray & Bray, 2019; Maloney, 2017) and in Finance Department advice on practice (Morton & Cook, 2018). By contrast, a recent review of the implementation of the PGPA Act could not identify what had led to the earlier demise of APS program evaluation (Alexander & Thodey, 2018). This exemplified both corporate amnesia and another turn of the management reform series. APS implementation priorities currently respond to the short-term concerns of the Government of the day. This was epitomised in the pertinent observation of a former Secretary: “the responsiveness has now become to the point where you have a passive reactive service that is so heavily tasked oriented that it can’t think beyond Christmas let alone the end of the week”. This lack of capability for managing the long-term has also been evident in the APS capability reviews, which demonstrated that the APS generally lacks skills in managing program performance and implementing outcomes-focussed strategies. Evaluation is the last of the five stages of public policy: “agenda setting; policy formulation; decision-making; policy implementation; policy evaluation” (Howlett & Cashore, 2014: p.23). Also absent is the complementary APS capability of change management: “change management is a key organisational capability, yet less than one-quarter of agencies covered by the 2013 agency survey believed their change management capability was at the desired level” (APSC, 2014c: p.100). This thesis finds that there has been no framework for implementing APS management reform with demonstrable and embedded impact over extended time.

8.9 Conclusions

This research has challenged frameworks for embedding effective management reform change. A key challenge was to the assumption that the top-down reform change agent, an APS Secretary, can achieve enduring and permanent changes in both the single APS and agency practice. This study also challenged temporal meanings of long-term success in implementing reform. Any such claims of long-term success can be contrasted with the repetition of the APS management reform series summarised in figure 10 of chapter 6. In relation to the key factor of ‘extended time’, this thesis established a gap between reform commencement (short-term) and embedding its changes (in the on-going, longer-term).

Evaluation theory can provide a link between these implementation actors, through the design feature of program logic. If used by reform initiators in the initial design of a reform that is capable of being evaluated, program logic establishes the desired reform outcomes over the short, medium and long-term. Missing in that existing framework is any ideal extended time for comparing reform intentions with their later outcomes. This study identified a similar lack of extended time in the tenure of the senior change agents. The findings of this research suggest the place of extended time is a gap in implementation theory.

By analysing the outcomes of reform practice, the research findings suggest a potentially wider implementation framework. Such a framework would link the elements of extended time, geographic dispersion, management information, differential staff responsibilities for change management and the priority of evaluability. From evaluation theory, program logic aids this future evaluability by designing the intended short, medium and long-term outcomes at the initiation of a management reform. Through that initial design of a management reform, the factor of evaluability contributes to closing that gap and enhancing implementation theory.

Page | 210

This study concludes that the effective implementation of embedded reform requires more systems-focused frameworks. Both change management theory and these conclusions have associated the changes of reform with senior change agents as individuals, such as Ministers or APS Secretaries. By contrast, a discovery of this thesis has been the lessened impact of those change agents after they move on, which has particularly undermined the maintenance of reform momentum. This thesis makes a major contribution to an implementation framework for consolidating and embedding the changes of public sector management reform through five additional factors: (1) geographic dispersion, requiring (2) the management of reform changes over (3) an extended time of decades by (4) different middle managers and consolidators of those changes, coupled with (5) the later evaluation of whether those changes were effective, including whether they were embedded in permanent practice. This opens possibilities for future research into frameworks for achieving permanent management changes in the public sector.


APPENDICES

Appendix 1

DEPARTMENTS OF THE AUSTRALIAN PUBLIC SERVICE as at 26 August 2018

Agriculture and Water Resources
Attorney-General’s
Communications and the Arts
Defence
Education and Training
Environment and Energy
Finance
Foreign Affairs and Trade
Health
Home Affairs
Human Services
Industry, Innovation and Science
Infrastructure, Regional Development and Cities
Jobs and Small Business
Prime Minister and Cabinet
Social Services
Treasury
Veterans’ Affairs

Departments of the Australian Parliament
Parliamentary Services
House of Representatives
Senate
Parliamentary Budget Office

Source: Finance (2018a)


Appendix 2

INVITATION TO PARTICIPATE


Appendix 3

PARTICIPANT INFORMATION STATEMENT AND CONSENT FORM


Appendix 4

BACKGROUND INFORMATION

Embedding Reform in the Australian Public Service: A Failure of Implementation? A Case Study of Implementing the 1980s Reform of Evaluation

Peter Graves, PhD Candidate, School of Business, UNSW ADFA
Email: [email protected]
Supervisor: Professor Deborah Blackman, Professor in Public Sector Management Strategy

Project Aim

This research examines whether theories of reform implementation can result in reform initiated in the Australian Public Service (APS) being embedded and made effective for the long term. It focuses on features of the evaluation discipline that may extend current frameworks of reform and implementation theories; these features are change impacts and change outcomes. This research considers whether integrating active evaluation into theories of implementation would result in an enhanced theory of reform effectiveness.

Importance of this Study

There is a gap in the implementation literature concerning the application of a longitudinal context connecting reform initiation, managing the associated change and evaluating the results which may (or may not) be eventually achieved. Using Pettigrew’s framework of longitudinal change, the research will link theories of policy implementation, change management, organisation science and evaluation, potentially to refine the policy cycle and develop the framework for implementing and successfully managing reform and organisational change. This case study examines the introduction and demise of evaluation within the 1980s APS reform of “Managing for Results” (MfR), being designed both to illustrate the introduction of a reform and to review its embedding. The APS reform in the Public Governance, Performance and Accountability Act 2013 requires agencies to demonstrate their non-financial performance and bears similarities to that 1980s reform. The study seeks to understand and explain how a single APS reform (performance and effectiveness through program evaluation) was implemented but not embedded over time.

Background

There have been two competing reform implementation theories: “bottom-up” (Pressman & Wildavsky, 1973), concerning the negative impacts of street-level officials on outcomes, and “top-down” (Van Meter & Van Horn, 1975), concerning the imposition of reforms by national government.
Alternatively, “implementation” can mean only starting reform (Prasser, 2004; Moran, 2013). The time taken on implementation can distort the priorities of beginning reform, especially if any single reform is overtaken by later ones (Lindquist, 2010; Sabatier, 1986). There are desirable management and leadership practices for successful implementation (Fernandez & Rainey, 2006; Kotter, 1995; Stewart & Kringas, 2003). However, there is only a modest literature on evaluating the performance of competing models of public management (Boston, 2000), and there is also a lack of an overall framework for the successful implementation and management of organisational change (By, 2005). Pettigrew (1990) introduced extended time as a feature of managing effective change, through a longitudinal framework systematically connecting reform intentions, contexts and the actions of change over extended time, to assess what results were actually achieved. Later, from the evaluation discipline, program logic was proposed (Baehler, 2007; Ryan, 2004) to bridge the gap between reform initiation and outcome, but it was not incorporated in implementation models. Evaluating the outcomes from Australian Public Service (APS) reforms has been recommended (Dawkins, 1985; Fernandez & Rainey, 2006), but it is now recognised that (since 1992) these evaluations have not been undertaken (O’Flynn, 2015). Although the 1980s APS reform of MfR was claimed to be implemented (Sedgwick, 1994), there is now an absence of evidence for its long-term success (Barrett, 2014; Hawke, 2012; Hughes, 2012). By mandating that agencies demonstrate their non-financial performance, the Public Governance, Performance and Accountability Act 2013 (Barrett, 2014) is considered to be comparable with the evaluation requirements of that 1980s reform.

References

Baehler, K. (2007). Intervention Logic/Program Logic: Towards Good Practice. In J. Wanna (Ed.), Improving Implementation: Organisational Change and Project Management. Australia and New Zealand School of Government, ANU Press, Canberra.
Barrett, P. (2014). New Development: Financial Reform and Good Governance. Public Money & Management, 34(1), 59-66.
Boston, J. (2000). The Challenge of Evaluating Systemic Change: The Case of Public Management Reform. International Public Management Journal, 3(1), 23-46.
By, T. R. (2005). Organisational Change Management: A Critical Review. Journal of Change Management, 5(4), 369-380.
Dawkins, J. (1985). Reforms in the Canberra System of Public Administration. Australian Journal of Public Administration, 44(1), 59-72.
Fernandez, S., & Rainey, H. G. (2006). Managing Successful Organizational Change in the Public Sector. Public Administration Review, March/April, 168-176.
Hawke, L. (2012). Australian Public Sector Performance Management: Success or Stagnation? International Journal of Productivity and Performance Management, 61(3), 310-328.
Hughes, O. (2012). Public Sector Trends in Australia. In Emerging and Potential Trends in Public Management: An Age of Austerity. Critical Perspectives on International Public Sector Management, 1, 173-193.
Kotter, J. (1995). Leading Change: Why Transformation Efforts Fail. Harvard Business Review, March-April.
Lindquist, E. (2010). From Rhetoric to Blueprint: The Moran Review as a Concerted, Comprehensive and Emergent Strategy for Public Service Reform. Australian Journal of Public Administration, 69(2), 115-151.
Moran, T. (2013). Reforming to Create Value: Our Next Five Strategic Directions. Australian Journal of Public Administration, 72(1), 1-6.
O’Flynn, J. (2015). Public Sector Reform: The Puzzle We Can Never Solve? In McTaggart, D., & O’Flynn, J., Public Sector Reform. Australian Journal of Public Administration, 74(1), 13-22.
Pettigrew, A. M. (1990). Longitudinal Field Research on Change: Theory and Practice. Organization Science, 1(3), 267-292.
Prasser, S. (2004). Poor Decisions, Compliant Management and Reactive Change: The Public Sector in 2003. Australian Journal of Public Administration, 63(1), 94-103.
Pressman, J. L., & Wildavsky, A. B. (1973). Implementation (1st ed.). University of California Press, Berkeley.
Ryan, B. (2004). Measuring and Managing for Performance: Lessons from Australia. In Strategies for Public Management Reform. Research in Public Policy Analysis and Management, 13, 415-449.
Sabatier, P. A. (1986). Top-down and Bottom-up Approaches to Implementation Research: A Critical Analysis and Suggested Synthesis. Journal of Public Policy, 6(1), 21-48.
Sedgwick, S. T. (1994). Evaluation of Management Reforms in the Australian Public Service. Australian Journal of Public Administration, 53(3), 341-347.
Stewart, J., & Kringas, P. (2003). Change Management—Strategy and Values in Six Agencies from the Australian Public Service. Public Administration Review, 63(6), 675-688.
Van Meter, D., & Van Horn, C. (1975). The Policy Implementation Process: A Conceptual Framework. Administration & Society, 6(4), 445-488.

Appendix 5 UNSW CANBERRA ETHICS COMMITTEE APPROVAL


Appendix 6

INTERVIEWEE QUESTIONS AND OBJECTIVES

Questions (in order asked) and their objectives

1. What was/is your background in the APS/academic world/politics/audit/consultant?
Objective: To assess whether the interviewee has reviewed the broad range of reforms, their contexts and impact(s) on the APS.

2. What is your perspective on reforms to the APS?
Objective: To ensure reasonably even distribution initially, or then allow for potential bias(es) in responses and subsequent research.

3. Can you outline your role in initiating/implementing/analysing/evaluating a reform and/or its aftermath?
Objective: To establish relative personal participation in, or analysis of, any of the reforms; and if so, at what levels of influence.

4. For what periods were you associated?
Objective: To place any contribution within its timeframe in the sequence of APS reforms.

5. Are you aware of any implementation frameworks or approaches or theories applied?
Objective: To test a potential or actual theory-practice gap.

6. (Following from Q.5) a) What did happen? b) What actual implementation methodology was used? c) How was “success” defined? d) Was this “success” defined at the start?
Objective: To identify actual APS practice(s); to test a potential or actual practice-theory gap.

7. Comments on the place of evaluation in these reforms?
Objective: To establish if “evaluation” is seen as a specialist or separate function, or part of usual management, and views on its absence.

8. What practices of embedding a reform in the APS have you read/written/practised?
Objective: To bring out discussion on interpretations of “implementation”, “embedding” and “evaluation”.

9. Comments on s. 38(1) of the Public Governance, Performance and Accountability Act 2013: “The accountable authority of a Commonwealth entity must measure and assess the performance of the entity in achieving its purposes”, with a particular focus on strengthening the quality of non-financial performance information.
Objective: To discover respondents’ interpretations of “performance”, “assess” and “achieving”; and to consider any interviewee judgements offered about any reform cycle perceived.

10. Other comments/issues not covered.
Objective: To traverse anything not originally considered.


Appendix 7

INTERVIEWEES AND PERSONAL CODES USED IN CATEGORISING THE OPEN CODES FROM INTERVIEWS

Former APS Secretary #1 - P.1
Professor of Public Administration #1 - P.2
Professor of Public Administration #2 - P.3
Academic in evaluation - P.4
Former Federal Minister #1 - P.5
Former Federal Minister #2 - P.6
Former APS Secretary #2 - P.7
Former APS Secretary #3 - P.8
Former APS Secretary #4 - P.9
Serving APS SES Officer #1 - P.10
Serving APS SES Officer #2 - P.11
Serving APS SES Officer #3 - P.12
Serving APS EL Officer #1 - P.13
Serving APS EL Officer #2 - P.14
Serving APS EL Officer #3 - P.15
Serving APS EL Officer #4 - P.16
Former APS SES Officer #1/senior ANAO official - P.17
Consultant/Former APS SES Officer #2/senior ANAO official - P.18
Consultant/Former APS SES Officer #3 - P.19
Former APS SES Officer #4 - P.20
Former APS SES Officer #5 - P.21
Consultant/Former APS SES Officer #6 - P.22
Former APS SES Officer #7 - P.23
Former APS SES Officer #8/senior ANAO official - P.24
Former APS EL Officer #1 - P.25
Consultant/Former APS EL Officer #2 - P.26
Consultant/Former APS EL Officer #3 - P.27
Consultant/Former APS EL Officer #4 - P.28
Consultant/Former APS EL Officer #5 - P.29
Academic/former APS officer #1 - P.30
Consultant (non APS) #1 - P.31
Consultant (non APS) #2 - P.32


Appendix 8

DERIVATION OF OPEN CODES FROM MAIN QUOTES

Accountability
a) How can you have an engaged and risk embracing workforce if they have no accountability and responsibility at more junior levels? (P.11 - serving SES officer #2);
b) This whole idea around well what’s the political accountability of a Minister if a department fails to achieve something (P.12 - serving SES officer #3);
c) But you can be publicly accountable and not have any impact, and not do any good (P.4 - academic in evaluation).

Annual Reports
a) Nothing happens to annual reports. They’re tabled. They might provide a bit of ammunition for the Opposition during estimates. Estimates, frankly in my view, have become a farce (P.11);
b) I think probably some who'd been in Parliament for a while, in an opposition role, felt frustrated with the absence of proper performance and accountability information being presented in annual reports, and being used to explain policies and policy impacts, and policy results in Senate estimates hearings and things like that (P.10 - serving SES officer #1).

APS - culture and roles
a) the weight attached to the status quo, the difficulty of really shaking that up and, frankly, the lack of political interest in the public service (P.11);
b) one of the issues is just the actual will to make long term changes which pay off in the long term (P.20 - former SES officer #4);
c) You just can’t produce anything that is long anymore, because people just don’t read it and it’ll just get put in the bottom shelf if it’s not short, concise, clear and if people need to get past that one page or people dispute the findings (P.13 - serving Executive Level officer #1);
d) there are a number of departments in the public service and other departments who do not actually know what it is that is expected of them basically because they receive the requirements from on high often by Ministers offices, or sometimes advisors that are not in Ministers offices by the way, advisors (P.17 - former SES officer #1 and senior ANAO official).

Government - influence on reform
a) If you’ve got a Minister for two years you’re doing well (P.2 - Public Administration Professor #1);
b) So it’s been a long time we think since there were politicians in numbers who are genuinely interested in the public service as an institution (P.11);
c) Whereas a lot of the reforms that we’re talking about were done with the public service because we had a leadership in the public service that wanted the public service to perform better and that coincided with the government’s aim of wanting the public service to perform better (P.22 - former SES officer #6).

Government - joined up
a) In more recent years, the challenge has been greater in the so-called "joined-up" environments, involving participation by the private for profit and not for profit organisations, other levels of government and across Agency cooperation. These aspects have made the reform path that much more difficult (P.17);
b) So there’s always been this disconnect between the ultimate objective that we’re wanting to achieve and the willingness of senior public servants and Ministers to be held to account for something which in the end they actually didn’t control (P.8 - former Secretary #3).


Government – responsiveness
a) I think the Public Service was totally unresponsive to government until we introduced these [MfR] reforms (P.9 - former Secretary #4);
b) responsiveness has now become to the point where you have a passive reactive service that is so heavily tasked oriented that it can’t think beyond Christmas let alone the end of the week. And doesn’t do a whole bunch of stuff around the management and the stewardship of the public service that it should do, which in earlier days were accepted as kind of the role of the service (P.8);
c) there’s a large amount of self-censorship going on, where we choose not to tell the Minister things (P.15 - serving Executive Level officer #3).

Highlighting existing problems
a) there’s no question that the uncertainties that are being created by a constantly changing public sector environment (P.17).

Independent advice about organisation
a) And there’s an element of that, whether it is Chapman on higher education, the way they use, raise academic consultants around the traps for things. When they don’t know the answers and get into trouble, they call him in to see if they’ve got any smart ideas. The rest of time you’re surplus to requirement (P.2).

Interviewee – personal
These were any subjective comments about the interviewee’s professional background.

Management - Secretaries including existing and later commentaries
a) Some Secretaries use a corporate plan as being “This is what we want the organisation to look like in three years’ time”, which is a kind of different thing from “We’re just going to deliver this program, well we are just going to do that, well we are just going to do that” (P.2);
b) with this particular agency what is going to be evaluated is decided by the Secretary through their governance committee and so it’s sort of imposed from on high and rather than a grassroots thing coming up meeting in the middle somewhere (P.26 – consultant/former Executive Level officer #2);
c) And a lot of this will depend upon Secretaries, the way Secretaries think and operate. There have been different Heads of PM&C over the last 20 years, I’m sure all of them think differently around certain things (P.12);
d) the C.E.O. and her Deputies were keen for reform, but as long as the reforms involved changes of behaviour, of subordinates and changes to systems, but when the reforms required changes to the senior staff’s behaviour, they started bucking up and they were not interested (P.15);
e) I think the Public Service is immeasurably better off when the Heads of agencies haven’t spent a lifetime in the agency (P.9);
f) the responsiveness to the agenda of the government of the day has meant that the political class and its imperatives with the new cycle and the expectations of citizens and the nature of the consultative processes with social media and engagement strategies and all that kind of stuff is now as the position where the political class is so focused on the short term and the tasks that the incentive structure that the Secretary faces is short term and task oriented (P.8).

Management – SES (Senior Executive Service)
a) some of them don’t want to manage. I mean a lot of people don’t want to take hard decisions. Secondly, once you create the conditions, it’s not automatic that they’ll use and take them up and that they’ll become better managers (P.2);
b) you could pick up department’s annual reports, and you could go through their performance indicators, and in ten minutes flat, you could say, this is a bad indicator, this is a hopeless, this is terrible. In ten minutes. And you think, this has been through a branch head, a division head, a deputy secretary, a secretary (P.18 – consultant/former SES officer #2/former ANAO official);
c) How can deputies be thoughtful and strategic if they’re doing the work of their junior S.E.S. officers? (P.11);
d) but you would think that the senior people, or the middle to senior people have a mental model in their head. Does that mental model ever make it to paper? And I don't think that the logic models that we're producing are the same as the mental models in people's heads (P.27 – consultant/former Executive Level officer #3);
e) We focus on accountability, not learning and improvement and we focus on improving the supply of performance feedback, not the demand from senior leadership for it and that’s the dynamic that really counts internationally (P.15);
f) MIAC was a deliberate attempt to get people below the Department Head level (P.9);
g) We had embedded a system where the Branch Head would stand in front of his people or her people and say, right, this is what I’ve heard, this is what I’m going to do about it, this is my performance improvement plan for me (P.8);
h) now you’re talking about, if I’m a Branch Head I’ve got to be somebody who’s a good business planner and I’m thinking at least 12 months ahead. If I’m a Division Head I’m thinking three or four years ahead (P.1).

Ministers - policy advice to or directions from
a) Peter Walsh was the Finance Minister. He had a strong interest. Prove to me that this is making a difference (P.18);
b) And there is, I think, an issue within the public service of public servants doing what ministers want them to do without engaging in robust debate about what’s the best outcome. Ultimately it’s the minister’s decision and the government’s decision (P.11);
c) but it’s unusual to have a Minister who takes a stand back look at the way the system is working (P.21 - former SES officer #5);
d) I think there’s a lack of what I’ll call performance leadership from the centre, Finance, our Ministers, although the A.N.A.O runs the argument to try and stimulate debate (P.15);
e) One of the difficulties for a Minister, unless you’re there for a very long time, is you don’t stay long enough to see iterations of a program, so that you don’t really want to chop and change all the time, so something’s going and you want to be given the chance to conclude and even if the initial reports say ‘well, maybe he needs to do better’, you can’t so, (P.5 - former federal Minister #1).

Ministers - Secretary relationships
a) Once they went on to the new contracts they were basically… well they’re on either a three or a five year contract and as the Barrett one clearly illustrated, you’re there on the confidence of the Minister and they have to give no reasons why they’ve lost confidence. So a lot of them knew that. I interviewed one former Secretary, a very senior Secretary and said “When I shave in the mirror in the morning I don’t know if I’ll have a job at the end of the day” (P.2);
b) We still have secretaries reporting to individual ministers, their performance is judged not on all outcomes they’ve produced for Australia but on how happy their ministers are (P.11);
c) Now responsiveness to the agenda of the government of the day is kind of where we should be but the problem is that ministers are so focused on their agenda that they ignore, and the incentive structure the secretaries face ignore the rest of the job of a secretary. So the job of the secretary is not just to be a manager of the business of the government of the day, it’s also to be the steward of an enduring institution (P.8).

Parliament – Committees and Reports
a) And they tend to use parliamentary investigations to embarrass their opponents rather than actually look at what’s happening in public administration (P.2);
b) I said you almost need somebody within Parliament that’s independent to just be saying, here’s the questions you senators ought to be asking (P.18);
c) So one fundamental point about what sustains a reform or what sustains something like evaluation is someone’s got to be interested, someone’s got to pay attention. And that has to be either Ministers or parliament or conceivably a central agency (P.22).

Performance - ANAO Efficiency, Performance; BPGs
a) The best thing that the Auditor-General did in this whole period was the promulgation of best practice guides where instead of buying into, you know, have the reforms worked or not worked, they said “If we look around and see who’s doing this, personnel management or service delivery or asset management, or whatever, what would constitute really good practice? What would a really top performing manager do?” (P.2);
b) performance audits have some overlap with evaluation. But they’re in some ways quite definitely not evaluations in that Auditors General tend to say that their focus is on government policy being implemented properly, and not on whether that was a good policy (P.18);
c) If you’re a Program Manager at whatever level, even at the senior level in the Department, you should be able to answer the A.N.A.O. if they ask “how will you know you’re on track to achieve your outcomes?” (P.32 – consultant (non APS) #2).

Performance - Annual Reports + IPAA Judges' Awards
a) Nothing happens to annual reports. They’re tabled. They might provide a bit of ammunition for the Opposition during estimates (P.11);
b) Annual reports are not a good way of reporting on performance (P.22).

Performance - central agencies' influence
a) various things that the Department of Finance is trying to use to engage different departments is important to do but each of the government departments is doing their own thing, or not doing anything or cobbling something together and hope it works (P.26);
b) So in the PM&C frame, I think people are now much, much more aware, for example, of the pitfalls that can happen, and the importance of connecting dots. So for a long time… as an example, for a long time Peter Shergold would constantly remind people of the importance of policy agencies talking to delivery agencies and the other people that they need, right? (P.12);
c) Department of Finance are really trying hard now. They’re really, really trying to capture that deep narrative (P.4).

Performance – framework
a) I somehow think better implementation frameworks and all those things are probably not the most powerful lever in comparison to the people squarely focussing on those really basic questions of, what are we really trying to achieve here? (P.18);
b) We should be able to reasonably say that we can report on inputs in that context, if it’s put in context and we can report on outputs and that eventually, further down the track, if we keep the same framework or we maintain a similar kind of framework for reporting, then eventually we can report on the outcomes as they arise (P.13 - serving Executive Level officer #1);
c) As with everything, same applied with FMA, the difficulty is not the legislative framework but in the implementation and that’s where I think as yet we haven’t seen PGPA changing the implementation framework that well and probably its again that’s a continuity (P.22).

Performance - including evaluation
a) So evaluation’s been a bit of a sorry story in Australia overall, not just evaluation of the new management reforms and whether people do... are using their flexibilities they’ve got making better managerial decisions but also whether there is actually performance on the ground to citizens, taxpayers (P.2);
b) evaluation isn’t really explained or well understood by most public servants. And so you’re working against resistance to that. And also because of the churn and things like that, at the time when the evaluation, or an evaluation would be useful, so many people have moved around that there’s really not an opportunity to bring the corporate knowledge of the original people (P.26);
c) I think the other problem was it wasn’t recognised that evaluation in particular and performance information, that they’re a set of skills, that it’s not just a generic skill, that it is a professional skill that you learn how to do it properly. That set of skills was never professionalised (P.19 - former SES officer #3);
d) I’ve developed an evaluation strategy for the Division and it outlines essentially how we will go about embedding evaluation in the Division, for the purpose of improving our policies and so part of that is an evaluation agenda, an annual evaluation agenda over the financial year (P.13);
e) So it was seen as red tape, to use the modern terminology, and so it got dropped but I think that contributed to people forgetting about the importance of evaluations (P.1);
f) And so I’ve been watching people in the evaluation space in government moving. They don’t stay anywhere for more than two to three years maximum. And there’s an expectation that you move onwards and upwards. Now that is a constant disruption to implementation (P.4);
g) in my experience the inadequacy of measurement of outcomes and transparency of decisions and processes is usually not the fault of public servants, notwithstanding the prevailing mythology (P.6 - former Federal Minister #2).

Performance - joined up government Commonwealth and States and Territories
a) I think the national partnership agreements had quite a few positive features. And part of the reason is, I think the Commonwealth was giving money to somebody outside the Commonwealth, namely the states, and so it had an incentive to say, we’re only going to give you this money if it makes a difference. So at the beginning of the arrangement, we’re going to set up the performance indicators, we’re going to explain the rationale, and how we’re going to measure success. And subsequent funding will be contingent on you meeting these benchmarks (P.18).

Performance - long term specifically
This code resulted from comments with some time-frame attached that was in years. These have included “several”, “three”, “four”, “ten”, “twenty” and the indirect “a longer-term view”.

Performance - paradigm shift
This code was only designated if this phrase was specifically mentioned in that context.

Policy - strategic use in APS
a) you do need to get a balance between independence, evidence-based policy advice and so on and being co-operative and I suppose my regret is that I don’t think we’ve got the right balance now (P.9);
b) evidence based policy also embraces economics. I’m saying that you can go down two paths. Economics, or what I’ve called strategic policy. And the strategic policy is what’s crowded out by the predominant fascination in Canberra with economics (P.7 - former Secretary #2).

Policy - versus delivery
a) So this whole wave of changing the way we operate is going to have an impact upon some of the traditional methods and practices around policy development right through to policy implementation and delivery (P.12).

Prime Ministers – influence
Comments coded were from direct references to Prime Ministers by either name or title.

Reform – Agenda
Comments were coded here when they specifically referred to various agendas since 1976, such as the “Coombs Royal Commission”, “Managing for Results”, “Ahead of the Game”, or the more indirect “efficiency agenda”, “commercialisation” and “contestability”.

Reform - basis of
a) Whereas a lot of the reforms that we’re talking about were done with the public service because we had a leadership in the public service that wanted the public service to perform better and that coincided with the government’s aim of wanting the public service to perform better (P.22).


Reform - Capability Reviews Coded here were references to these Capability Reviews conducted by the Australian Public Service Commission between 2011 and 2016 (APSC, 2013f).

Reform - change agents This was derived from the mention of specific individuals by name who have been associated with initiating and driving change in the APS over the past thirty to thirty-five years. There were also some minor references to the generic “individual champions of change”.

Reform – series a) the changes are often just going around in circles (P.21); b) the pendulum swung back to the middle again (P.12); c) It’s not that much of a reform. It’s just in a continuum, it’s rebadging some of the reforms that were already in place (P.22); d) Anything that’s attached to the policy cycle of the country. And the political cycles are getting smaller and smaller. They’re not three years (P.4).

Reform - developments – external a) the public service in Canberra lost sight of things that were happening outside Canberra in respect of what I would call strategic policy and very rapid improvements in how public sector management might be approached, largely influenced by developments in Australia and overseas in the private sector (P.7).

Reform - developments - internal or progressive a) a lot of those things have been internally generated. By that I mean that they’ve come from within departments and from other organisations, including the Public Service Board, often without a lot of outside prompting (P.21).

Reform – embedding a) We don't know how long it'll take to embed the reforms in the system, but in these reforms we've invested a fair bit of effort in to implementation, so we... it's not a matter of putting the formal framework in place and then walking away from it, and leaving the system to implement it itself (P.10 - Serving SES officer #1). b) governments have been about either frittering away largesse or making announcements and just moving on and not following them through (P.2).

Reform - evaluation or lack of a) one of the reasons the evaluation unit actually continued in employment, was because in fact you had by then, highly trained evaluators, who could still do it (P.19); b) so one of the reasons why the evaluation framework as a formal requirement is dropped, there were two reasons. One was there was a bit of rote form filling and ignoring of the evaluations at the time. The second was a feeling among a number of Ministers that they had this kind of sudden rush of blood to the head about outsourcing that if every activity of the Commonwealth public service could be put to competitive tendering it didn’t need evaluation anyway because you’d sort it out through competitive tenders (P.22); c) there was never a very clear understanding that in order to understand the impact of these reforms [“Ahead of the Game”, 2010], understand the impact they were having and whether they were having an impact, you had to do some sort of baseline assessment of things and then have an opportunity and a methodology to go back, subsequently and decide whether in fact any changes have actually taken place (P.25 - former Executive Level officer #1); d) The evaluation section are under skilled, under-resourced and really floundering I think, in my view (P.13).


Reform - FMIP or managing for results (specifically) This code was reserved for references to the 1980s reform titled either the Financial Management Improvement Program, or “managing for results” (Keating, 1990).

Reform – history Comments coded here concerned any of the reforms in the thirty-six-year sequence since 1976 summarised in Figure 3, Chapter 2.

Reform - implementation in terminology a) there are skills involved in implementation, in getting performance information. I think those skills are a bit thin on the ground as well. But the encouragement to develop those skills and use them, is the thing that’s a bit lacking (P.18); b) So it wasn’t just about implementation issues, it was also about when do you use horizontal management rather than vertical management, and the whole issue of networks (P.1); c) Whereas I think Peter Shergold changed that [formation of Cabinet Implementation Unit, 2003] and his successors also were very focused on not only getting the right policy decision but also making sure that the implementation of that policy occurred properly and to peoples’ expectations (P.12).

Reform - implementation theory a) See the academics who have worked closely with the public service now are getting smaller and smaller (P.2); b) I somehow think better implementation frameworks and all those things are probably not the most powerful lever in comparison to the people squarely focussing on those really basic questions of, what are we really trying to achieve here? (P.18); c) so what actually happened was that the, rather than the government announcing reforms and getting the public service to implement it, it was the reverse. The public service was coming up, was given the green light to say yes we want it to change. … So in essence what happened then of course the academic community was quite scathing in its criticism of these reforms because the simple response was they had no theoretical underpinning (P.17).

Reform - PGPA Act This code was derived from Q.9: Comments on S. 38 (1) of the Public Governance, Performance and Accountability Act 2013.

Reform - program logic a) In the case of the national partnerships, they did have their program logics, and the ultimate good we intend is this (P.18); b) We need to continue to invest in our thinking around how you design, how you build, how you deploy (P.12).

Reform – systemic a) and you look back on it [FMIP reforms] and see it as a coherent whole but the coherent whole wasn’t seen at the beginning (P.1); b) I think legislating for an Evaluator General, that could provide a bit of a goad on the side that would nudge things in the right kind of direction (P.18); c) one of the key systemic findings from the capability reviews was that people weren't connecting their work to the mission of the organisation (P.20).

Research - impact of a) My work has basically been academic commentary on what they’ve been doing. I’m not that normative (P.2); b) Australia never took a very closely theory based approach. The way I’ve explained our approaches, it’s as much as anything a trial and error approach (P.22); c) There hasn't been any capacity building training workshops, there hasn't been this research agenda around bringing experts together to create something new that is suitable (P.27); d) In fact, I was told at one stage that “we’re trying to de-academify… take the academic out of evaluation, so that it can be more useable”. My argument was we don’t have to take the academic out of evaluation, we can make it simple (P.13).

Transparency a) There are some departments sort of really hate the idea of transparency because they fear that it’ll reveal to the world that they’re not doing anything useful at all. And that’s, so there’ll always be some resistance to greater transparency, there’ll be resistance to evaluation from people who are frightened by it and sometimes the reasons they’re frightened is there good reason to be frightened (P.22).


Appendix 9
PRIORITIES OF SECRETARIES AND ALL OTHERS

FORMER SECRETARIES (number of Secretaries noting the code; total times noted)
REFORM: 4; 128 (40%)
Embedding: 4; 25
Change agents: 4; 20
Agenda: 4; 18
Implementation: in terminology: 4; 11
Basis of: 4; 9
FMIP or MfR (specifically): 3; 9
Developments – external: 3; 8
Developments: internal/progressive: 1; 7
History: 4; 5
Evaluation: 1; 4
Systemic: 3; 4
PGPA Act: 2; 3
Capability Reviews: 1; 3
Implementation – theory: 1; 2
Series: 0; 0
Program logic: 0; 0
PERFORMANCE: 4; 56 (17.5%)
Including Evaluation: 4; 20
Long-term (specifically): 2; 15
Central Agencies Influence: 2; 13
Joined up government (CW/S/Ts): 2; 6
Framework: 1; 2
ANAO: Efficiency/Performance Audits; Best Practice Guides: 0; 0
Annual Report/IPAA Judges’ Awards: 0; 0
Paradigm shift (mentioned or not): 0; 0
GOVERNMENT & MINISTERS: 4; 53 (16.5%)
Government – influence on reform: 4; 16
Ministers - advice or directions: 4; 13
Government – responsiveness: 3; 12
Ministers - Secretaries: 4; 11
Government – joined up: 1; 1
MANAGEMENT: 3; 41 (13%)
Secretaries, commentaries: 2; 28
SES: 3; 13
GENERAL: 3; 27 (8%)
APS – culture and roles: 2; 21
Accountability: 1; 4
Highlighting existing problem: 1; 1
Independent advice: organisation: 1; 1
Annual Reports: 0; 0
Interviewer - personal: 0; 0
Transparency: 0; 0
POLICY: 3; 7 (2%)
Strategic use in APS: 3; 5
Versus delivery: 1; 2
PRIME MINISTERS: 4; 5
Influence: 4; 5
RESEARCH: 3; 3
Impact of: 3; 3
PARLIAMENT: 1; 1
Committees and Reports: 1; 1

ALL OTHER RESPONSES (SES; EL; ASO) (times noted)
REFORM: 450 (34.5%)
Embedding: 55
PGPA Act: 54
Implementation in terminology: 53
Change agents: 51
Basis of: 33
Systemic: 29
Cycle: 25
Developments - external: 24
Program logic (specifically): 23
FMIP, or MfR (specifically): 21
History: 18
Evaluation or lack of: 17
Implementation theory: 13
Agenda: 12
Developments - internal: 12
Capability Reviews: 10
PERFORMANCE: 346 (26.5%)
Including evaluation: 113
Framework: 80
Long term (specifically): 49
Central agencies’ influence: 48
ANAO Efficiency/Performance Audits; Best Practice Guides: 35
Joined up government: CW & S+Ts: 8
Annual Reports/IPAA Judges' Awards: 8
Paradigm shift (mentioned or not): 5
GENERAL: 203 (15.5%)
APS - culture and roles: 135
Accountability: 41
Annual Reports: 9
Highlighting existing problems: 8
Interviewer - personal: 5
Transparency: 3
Independent advice on organisation: 2
GOVERNMENT & MINISTERS: 141 (11%)
Government - influence on reform: 55
Ministers - policy advice/directions: 36
Ministers - Secretary relationships: 21
Prime Ministers - influence: 15
Government - responsiveness: 11
Government - joined up: 3
MANAGEMENT: 85 (6.5%)
Secretaries, existing/later: 43
SES: 42
PARLIAMENT: 32 (2.5%)
Committees and reports: 32
RESEARCH: 26
Impact of: 26
POLICY: 21
Versus delivery: 11
Strategic use in APS: 10

REFERENCES

Aberbach, J. D., & Christensen, T. (2014). Why Reforms so Often Disappoint. The American Review of Public Administration, 44(1), 3-16.

AGRAGA (2010). Ahead of the Game: Blueprint for the Reform of Australian Government Administration. Advisory Group on Reform of Australian Government Administration. Department of the Prime Minister and Cabinet, Canberra. http://apo.org.au/system/files/20863/apo-nid20863-24401.pdf

Alexander, E., & Thodey, D. (2018). Independent Review into the Operation of the Public Governance, Performance and Accountability Act 2013 and Rule. Report, September. Department of Finance, Canberra. https://www.finance.gov.au/sites/all/themes/pgpa_independent_review/report/PGPA_Independent_Review_-_Final_Report.pdf

Alderman, L. (2015). Context-sensitive Evaluation: Determining the Context Surrounding the Implementation of a Government Policy. Evaluation Journal of Australasia, 15(4), 4-15.

Alford, J. (1993). Towards a New Public Management Model: Beyond “Managerialism” and its Critics. Australian Journal of Public Administration, 52(2), 135-144.

Althaus, C. (2011). Assessing the Capacity to Deliver – the BER Experience. Australian Journal of Public Administration, 70(4), 421-436.

Althaus, C., Bridgman, P., & Davis, G. (2007). The Australian Policy Handbook (4th edition). Allen & Unwin, Crows Nest.

Althaus, C., & McKenzie, L. (2018). Hitting the Implementation Wall. Australia and New Zealand School of Government, Canberra. https://www.anzsog.edu.au/resource-library/news-media/hitting-implementation-wall (accessed 3/9/18)

Althaus, C., & Wanna, J. (2008). The Institutionalisation of Leadership in the Australian Public Service. In ’t Hart, P., & Uhr, J. (Eds). Public Leadership: Perspectives and Practices. The Australia and New Zealand School of Government, Canberra, 117-132.

ANAO (1991). Implementation of Program Evaluation – Stage 1. Audit Report No. 23 1990-91. The Auditor-General, Canberra.

ANAO (1997). Program Evaluation in the Australian Public Service. Audit Report No. 3, 1997-98. Australian National Audit Office, Canberra. https://www.anao.gov.au/sites/g/files/net616/f/anao_report_1997-98_03.pdf

ANAO (2003). Annual Performance Reporting. Audit Report No 11, 2003-04. Australian National Audit Office, Canberra.

ANAO (2011). Developing and Implementing Key Performance Indicators to Support the Outcomes and Programs Framework. Audit Report No 5 2011-12. Australian National Audit Office, Canberra.


ANAO (2013a). Agencies’ Implementation of Performance Audit Recommendations. Audit Report 53, 2012-13. Australian National Audit Office, Canberra.

ANAO (2013b). The Australian Government Performance Measurement and Reporting Framework - Pilot Project to Audit Key Performance Indicators. Audit Report 28, 2012-13. Australian National Audit Office, Canberra.

ANAO (2014). Public Sector Governance: Strengthening Performance through Good Governance. Better Practice Guide. Australian National Audit Office, Canberra. At https://catalogue.nla.gov.au/Record/7424331

ANAO (2017). Implementation of the Annual Performance Statements Requirements 2015–16. Audit Report 58, 2016-17. Australian National Audit Office, Canberra. https://www.anao.gov.au/work/performance-audit/implementation-annual-performance-statements-requirements-2015-16

ANAO (2018). Review of Better Practice Guides. Australian National Audit Office, Canberra. At https://www.anao.gov.au/work/better-practice-guide/review-anao-better-practice-guides (accessed 5/6/19)

ANAO (2019). Implementation of ANAO and Parliamentary Committee Recommendations. Report No.6 2019–20. Australian National Audit Office, Canberra. At https://www.anao.gov.au/sites/default/files/Auditor-General_Report_2019-2020_6.pdf (accessed 7/8/19)

ANAO/DoF (1996). Better Practice Principles for Performance Information. Australian National Audit Office/Department of Finance, Canberra. http://www.policypartners.com.au/assets/anao-performance-information-principles.pdf

ANAO/DoFA (2004). Better Practice in Annual Performance Reporting. Better Practice Guide. Australian National Audit Office, Department of Finance & Administration, Canberra.

ANAO/DPM&C (2014). Successful Implementation of Policy Initiatives. Better Practice Guide. Australian National Audit Office, Department of Prime Minister & Cabinet, Canberra. At https://catalogue.nla.gov.au/Record/7424317

Andrews, J., Cameron, H., & Harris, M. (2008). All Change? Managers’ Experience of Organizational Change in Theory and Practice. Journal of Organizational Change Management, 21(3), 300-314.

Andrews, R., & Esteve, M. (2015). Still Like Ships that Pass in the Night? The Relationship between Public Administration and Management Studies. International Public Management Journal, 18(1), 31-60.

ANZSOG (2017). Evidence and Evaluation Hub. The Australia and New Zealand School of Government. At https://www.anzsog.edu.au/about/evidence-and-evaluation-hub


ANZSOG (2019). Evaluation and Learning from Failure and Success. An ANZSOG Research Paper for the Australian Public Service Review Panel. The Australia and New Zealand School of Government. At https://www.apsreview.gov.au/resources/evaluation-and-learning-failure-and-success

Appelbaum, S. H., Habashy, S., Malo, J.-L., & Shafiq, H. (2012). Back to the Future: Revisiting Kotter's 1996 Change Model. Journal of Management Development, 31(8), 764-782.

APSC (2003). The Australian Experience of Public Sector Reform. Australian Public Service Commission, Canberra. At https://resources.apsc.gov.au/pre2005/exppsreform.pdf

APSC (2007a). Building Better Governance. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/__data/assets/pdf_file/0010/7597/bettergovernance.pdf

APSC (2007b). Tackling Wicked Problems: A Public Policy Perspective. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/publications-and-media/archive/publications-archive/tackling-wicked-problems

APSC (2009). Senior Executive Service 25th Anniversary. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/publications-and-media/archive/publications-archive/ses-25th-anniversary

APSC (2012). State of the Service Report, 2011-12. Commissioner’s Overview. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/about-the-apsc/parliamentary/state-of-the-service/2011-12-sosr/01-commissioners-overview

APSC (2013a). Embedding APS Values. State of the Service Report 2012-13:54. Australian Public Service Commission, Canberra.

APSC (2013b). Program Effectiveness. State of the Service Report, 2012-13:8. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/__data/assets/pdf_file/0006/59379/SOSR-2012_13-final-tagged2.pdf

APSC (2013c). Strengthening a Values Based Culture: A Plan for Integrating the APS Values into the Way We Work. Australian Public Service Commission, Canberra. https://resources.apsc.gov.au/2013/strengthen_values.pdf

APSC (2013d). Organisational Capacity. State of the Service Report 2012-13:208. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/__data/assets/pdf_file/0006/59379/SOSR-2012_13-final-tagged2.pdf

APSC (2013e). Capability Reviews - Status and Findings, Ch.10. in State of the Service Report, 2012-13, 208. Australian Public Service Commission, Canberra. At http://www.apsc.gov.au/about-the-apsc/parliamentary/state-of-the-service/sosr-2012-13/chapter-ten/capability-reviews-status-and-findings

APSC (2014a). Senior Leadership in the APS. Chapter 5, in State of the Service Report, 2013-14. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/about-the-apsc/parliamentary/state-of-the-service/state-of-the-service-2013-14/sosr-2013-14/theme-two-effectiveness/chapter-5/senior-leadership-in-the-aps


APSC (2014b). State of the Service Report, 2013-14. Australian Public Service Commission, Canberra. https://www.apsc.gov.au/sites/g/files/net4441/f/sosr-2013-14-web.pdf

APSC (2014c). Managing for Change. Chapter 6, State of the Service Report 2013-14. Australian Public Service Commission, Canberra. https://www.apsc.gov.au/sites/g/files/net4441/f/sosr-2013-14-web.pdf

APSC (2015b). APS Statistical Bulletin 2014-15: APS at a Glance. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/about-the-apsc/parliamentary/aps-statistical-bulletin/statistics-2015/main-features

APSC (2016). Capability Review Program. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/priorities/capability-reviews (accessed 30/5/19)

APSC (2018a). Location and Regional Staff, at 31 December 2017. Australian Public Service Commission, Canberra. At https://www.apsc.gov.au/location-and-regional-staff (accessed 16/1/19)

APSC (2018b). Australian Public Service Statistical Bulletin: December 2018. Table 10: All Employees: Location by Base Classification and Gender. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/about-the-apsc/parliamentary/aps-statistical-bulletin/statisticalbulletin16-17 (accessed 29/5/19)

APSC (2018c). State of the Service Report 2017-18. Australian Public Service Commission, Canberra. At https://www.apsc.gov.au/state-service-report-2017-18

APSC (2018d). APS Employment Data. 30 June 2018 Release. Australian Public Service Commission, Canberra. At https://www.apsc.gov.au/aps-employment-data-30-june-2018-release

APSC (2019). The Role of the Senior Executive Service. Australian Public Service Commission, Canberra. At https://www.apsc.gov.au/senior-executive-service-ses-0 (accessed 14/8/19)

APSJobs (2017). Vacancy: Chief Evaluation Officer (EL 2), Department of Social Services. APSJobs, Australian Government website. https://www.apsjobs.gov.au/SearchedNoticesView.aspx?Notices=10706066%3A1&mn=JobSearch

APS Review (2019). Secretaries Board Driving Outcomes Across Government and APS Performance. Independent Review of the APS, Canberra. At https://contribute.apsreview.gov.au/secretaries-board (accessed 26/3/19)

Armenakis, A. A., & Bedeian, A. G. (1999). Organizational Change: A Review of Theory and Research in the 1990s. Journal of Management, 25(3), 293-315.

Armenakis, A. A., Harris, S. G., Feild, H. S. (2000). Making Change Permanent. A Model for Institutionalizing Change Interventions. Research in Organizational Change and Development, 12, 97-128.

Attride-Stirling, J. (2001). Thematic Networks: an Analytic Tool for Qualitative Research. Qualitative Research, 1(3), 385-405.

Aucoin, P. (1990). Administrative Reform in Public Management: Paradigms, Principles, Paradoxes and Pendulums. Governance, 3(2), 115-137.

Aucoin, P. (2012). New Political Governance in Westminster Systems: Impartial Public Administration and Management Performance at Risk. Governance, 25(2), 177-199.

Australian Government (2018a). Independent Review of the Australian Public Service. Canberra. https://www.apsreview.gov.au/about

Australian Government (2018b). We’re Doing Our Homework. The Lessons from the Past. Independent Review of the Australian Public Service. Canberra. https://www.apsreview.gov.au/news/were-doing-our-homework

Baehler, K. (2003). ‘Managing for Outcomes’: Accountability and Thrust. Australian Journal of Public Administration, 62(4), 23-34.

Baehler, K. (2007). Intervention Logic/Program Logic: Towards Good Practice. In Wanna, J. (Ed.), Improving Implementation: Organisational Change and Project Management. Australia and New Zealand School of Government, ANU Press, Canberra.

Baker, S. E., & Edwards, R. (2012). How Many Qualitative Interviews is Enough? National Centre for Research Methods Review Paper. University of Southampton.

Banks, G. (2005). Structural Reform Australian-Style: Lessons for Others? Presentation to the IMF, World Bank and OECD, May. Productivity Commission, Canberra. http://www.pc.gov.au/news-media/speeches/cs20050601

Banks, G. (2014). Restoring Trust in Public Policy: What Role for the Public Service? Australian Journal of Public Administration, 73(1), 1-13.

Barker, L., McKeown, T., Wolfram Cox, J., & Bryant, M. (2018). More of the Same? A Dual Case Study Approach to Examining Change Momentum in the Public Sector. Australian Journal of Public Administration. 77(2), 253-271.

Barrett, P. (2001a). Evaluation and Performance Auditing: Sharing the Common Ground. Address to the Australasian Evaluation Society. Australian National Audit Office, Canberra. At https://www.anao.gov.au/work/speech/evaluation-and-performance-auditing-sharing-common-ground

Barrett, P. (2001b). Retention of Corporate Memory and Skills in the Public Service. Canberra Bulletin of Public Administration, (100), 1-7, June. https://search.informit.com.au/fullText;dn=200113887;res=IELAPA.

Barrett, P. (2002). Achieving Better Practice Corporate Governance in the Public Sector. Speech by the Auditor-General to the International Quality and Productivity Centre Seminar. 26 June. Australian National Audit Office, Canberra. At https://www.anao.gov.au/work/speech/achieving-better-practice-corporate-governance-public-sector (accessed 5/9/18)

Barrett, P. (2004a). Results Based Management and Performance Reporting – an Australian Perspective. Address to UN Results Based Management Seminar. 5 October. At http://anao.gov.au/uploads/documents/Results_Based_Management_and_Performance_Reporting1.pdf

Barrett, P. (2004b). ANAO's Role in Encouraging Better Public Sector Governance. Address to ANZSOG Students at ANU, 29 September. Australian National Audit Office, Canberra. At http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.649.1003&rep=rep1&type=pdf

Barrett, P. (2012). Performance Auditing—Addressing Real or Perceived Expectation Gaps in the Public Sector. Public Money & Management, 32(2), 129-136.

Barrett, P. (2014). New Development: Financial Reform and Good Governance. Public Money & Management, 34(1), 59-66.

Barrett, P. (2016). New Development: Financial Reforms Played in Two Octaves — Yet Again? Public Money & Management, 36(4), 307-312.

Barrett, P. (2017). Effectiveness of the Development and Implementation of Australian Public Sector Management and Financial Reforms: E= MC2. Public Money & Management, 37(6), 451-456.

Barrett, P. (2018). New Development: Parliamentary ‘watchdogs’ Taking a Higher Profile on Government Programme Performance and Accountability? Public Money & Management, 38(6), 471-476.

Barrett, S. M. (2004). Implementation Studies: Time for a Revival? Personal Reflections on 20 years of Implementation Studies. Public Administration, 82(2), 249-262.

Barton, A. (2009). The Use and Abuse of Accounting in the Public Sector Financial Management Reform Program in Australia. Abacus, 45(2), 221-248.

Bartos, S. (1995). Current Developments in Performance Information. Australian Journal of Public Administration, 54(3), 386-392.

Bartos, S. (2003). Creating and Sustaining Innovation. Australian Journal of Public Administration, 62(1), 9-14.

Barzelay, M. (2007). Learning from Second-hand Experience: Methodology for Extrapolation - Oriented Case Research. Governance, 20(3), 521-543.

Baskarada, S. (2014). Qualitative Case Study Guidelines. The Qualitative Report, 19(40), 1-25.

Battilana, J., Gilmartin, M., Sengul, M., Pache, A. C., & Alexander, J. A. (2010). Leadership Competencies for Implementing Planned Organizational Change. The Leadership Quarterly, 21(3), 422-438.

Beck, N., Brüderl, J., & Woywode, M. (2008). Momentum or Deceleration? Theoretical and Methodological Reflections on the Analysis of Organizational Change. Academy of Management Journal, 51(3), 413-435.

Beer, C. (2009). National Capital Bureaucracy as a Spatial Phenomenon: The Place of Canberra within the Australian Public Service. Administration & Society, 41(6), 693-714.


Bergmann, A., Grossi, G., Rauskala, I., & Fuchs, S. (2016). Consolidation in the Public Sector: Methods and Approaches in Organisation for Economic Co-operation and Development Countries. International Review of Administrative Sciences, 82(4), 763-783.

Bickman, L. (1987). The Functions of Program Theory. New Directions for Program Evaluation, 1987 (33), 5-18.

Biernacki, P., & Waldorf, D. (1981). Snowball Sampling: Problems and Techniques of Chain Referral Sampling. Sociological Methods & Research, 10(2), 141-163.

Blackman, D. (2015). Employee Performance Management in the Public Sector – A Process without a Cause. in West, D., & Blackman, D. (2015). Performance Management in the Public Sector. Australian Journal of Public Administration, 74(1), 73-81.

Blackman, D. A., Buick, F., O'Donnell, M., O'Flynn, J. L., & West, D. (2013). Strengthening the Performance Framework: Towards a High Performing Australian Public Service. Australian Public Service Commission, Canberra. http://www.apsc.gov.au/publications-and-media/current-publications/strengthening-performance

Blalock, A. B. (1999). Evaluation Research and the Performance Management Movement: from Estrangement to Useful Integration? Evaluation, 5(2), 117-149.

Boin, A., & Christensen, T. (2008). The Development of Public Institutions: Reconsidering the Role of Leadership. Administration & Society, 40(3), 271-297.

Boston, J. (2000). The Challenge of Evaluating Systemic Change: the Case of Public Management Reform. International Public Management Journal, 3(1), 23-46.

Bourgeois, I. (2016). Performance Measurement as Precursor to Organisational Evaluation Capacity Building. Evaluation Journal of Australasia, 16(1), 11-18.

Bourgeois, I., & Cousins, J. B. (2013). Understanding Dimensions of Organizational Evaluation Capacity. American Journal of Evaluation, 34(3), 299-319.

Bovaird, T. (2014). Attributing Outcomes to Social Policy Interventions–‘Gold Standard’ or ‘Fool's Gold’ in Public Policy and Management? Social Policy & Administration, 48(1), 1-23.

Bovaird, T., & Russell, K. (2007). Civil Service Reform in the UK, 1999–2005: Revolutionary Failure or Evolutionary Success? Public Administration, 85(2), 301-328.

Bowen, G. A. (2008). Naturalistic Inquiry and the Saturation Concept: a Research Note. Qualitative Research, 8(1), 137-152.

Bowen, G. A. (2009). Document Analysis as a Qualitative Research Method. Qualitative Research Journal, 9(2), 27-40.

Boyne, G. A. (2003). Sources of Public Service Improvement: A Critical Review and Research Agenda. Journal of Public Administration Research and Theory, 13(3), 367-394.


Bozeman, B. (2013). What Organization Theorists and Public Policy Researchers Can Learn from One Another: Publicness Theory as a Case-in-Point. Organization Studies, 34(2), 169-188.

Bozeman, B., & Bretschneider, S. (1986). Public Management Information Systems: Theory and Prescription. Public Administration Review, 46, 475-487.

Bradbury, D. (2013). Second Reading Speech. Public Governance, Performance and Accountability Bill. Australian Parliament Hansard, 16 May. 3447-3449. http://parlinfo.aph.gov.au/parlInfo/genpdf/chamber/hansardr/135b167f-578d-4414-a5b0-b2a35bc1cc32/0029/hansard_frag.pdf;fileType=application%2Fpdf

Breidahl, K. N., Gjelstrup, G., Hansen, H. F., & Hansen, M. B. (2017). Evaluation of Large-Scale Public-Sector Reforms: A Comparative Analysis. American Journal of Evaluation, 38(2), 226-245.

Bridgman, P., & Davis, G. (2003). What Use is a Policy Cycle? Plenty, if the Aim is Clear. Australian Journal of Public Administration, 62(3), 98-102.

Briggs, L. (2007a). Program Management and Organisational Change: New Directions for Implementation. In Wanna, J. (Ed.), Improving Implementation: Organisational Change and Project Management. Australia and New Zealand School of Government, ANU Press, Canberra.

Briggs, L. (2007b). Commissioner’s Foreword. Building Better Governance. Australian Public Service Commission, Canberra, iii. https://www.apsc.gov.au/building-better-governance

Broadbent, J. (2013). Reclaiming the Ideal of Public Service, Public Money & Management, 33(6), 391-394.

Broadbent, J. (2017). Academic Evidence, Policy and Practice. Public Money & Management, 37(4), 233-236.

Brown, K., Waterhouse, J., & Flynn, C. (2003). Change Management Practices: is a Hybrid Model a Better Alternative for Public Sector Agencies? International Journal of Public Sector Management, 16(3), 230-241.

Brunetto, Y., & Teo, S. T. (2018). Editorial Special Issue: The Impact of Organizational Change on Public Sector Employee Outcomes. Australian Journal of Public Administration, 77(2), 149-153.

Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and Teaching Evaluative Thinking: Insights from Research on Critical Thinking. American Journal of Evaluation, 36(3), 375-388.

Buick, F., Blackman, D., & Johnson, S. (2018). Enabling Middle Managers as Change Agents: Why Organisational Support Needs to Change. Australian Journal of Public Administration, 77(2), 222-235.

Page | 242

Buick, F., Blackman, D., O'Donnell, M. E., O'Flynn, J. L., & West, D. (2015). Can Enhanced Performance Management Support Public Sector Change? Journal of Organizational Change Management, 28(2), 271-289.

Buick, F., Blackman, D., O'Flynn, J., O'Donnell, M., & West, D. (2016). Effective Practitioner – Scholar Relationships: Lessons from a Coproduction Partnership. Public Administration Review, 76(1), 35-47.

Burgess, V. (2017). Mandarins Say it’s Time for a New Royal Commission to Rethink Future and Role of the Federal Public Sector. The Mandarin, 26 July. http://www.themandarin.com.au/81812-verona-burgess-dennis-richardson-royal-commission-job-of-the-australian-public-service/

Burroughs, J. (2018). Review of Implementation. Journal of Public Affairs Education, On-line 18/12/18. At https://www.tandfonline.com/doi/full/10.1080/15236803.2018.1555732 (accessed 17/1/19).

By, R. T. (2005). Organisational Change Management: A Critical Review. Journal of Change Management, 5(4), 369-380.

By, R. T., Hughes, M., & Ford, J. (2016). Change Leadership: Oxymoron and Myths. Journal of Change Management, 16(1), 8-17.

Caiden, G. (1967). The Commonwealth Bureaucracy. University Press.

Cairney, P. (2013). Standing on the Shoulders of Giants: How do we Combine the Insights of Multiple Theories in Public Policy Studies? Policy Studies Journal, 41(1), 1-21.

Campbell, C. (2001). Juggling Inputs, Outputs, and Outcomes in the Search for Policy Competence: Recent Experience in Australia. Governance, 14(2), 253-282.

Carey, G., Buick, F., & Malbon, E. (2018). The Unintended Consequences of Structural Change: When Formal and Informal Institutions Collide in Efforts to Address Wicked Problems. International Journal of Public Administration, 41(14), 1169-1180.

Carey, G., Neville, A., Kay, A., & Malbon, E. (2019). Managing Staged Policy Implementation: Balancing Short‐Term Needs and Long‐Term Goals. Social Policy & Administration. First published: 26 July 2019.

Chavez, C. (2008). Conceptualizing from the Inside: Advantages, Complications, and Demands on Insider Positionality. The Qualitative Report, 13(3), 474-494.

Christensen, T., & Lægreid, P. (2007). The Whole‐of‐Government Approach to Public Sector Reform. Public Administration Review, 67(6), 1059-1066.

Cinite, I., Duxbury, L. E., & Higgins, C. (2009). Measurement of Perceived Organizational Readiness for Change in the Public Sector. British Journal of Management, 20(2), 265-277.

Coates, J. (1992). Parliamentary Use of Evaluation Data in Program Performance Statements. Australian Journal of Public Administration, 51(4), 450-454.

Colvin, A. (2019). Transformation: Modernising the AFP. Australian Federal Police Commissioner’s address to seminar of Institute of Public Administration, Australia. Canberra, 29 July. https://www.themandarin.com.au/112772-if-he-could-turn-back-time-andrew-colvins-tips-on-change-management-in-hindsight/

Commission of Audit (1996). National Commission of Audit. Now at https://www.michaelsmithnews.com/2013/10/the-howard-government-national-commission-of-audit-march-1996.html

Coombs, H. C. (1977). The Royal Commission on Australian Government and Administration: 1974-76. Public Administration, 55(3), 269-280.

Corbin, J. M., & Strauss, A. (1990). Grounded Theory Research: Procedures, Canons, and Evaluative Criteria. Qualitative Sociology, 13(1), 3-21.

Council on Federal Financial Relations (n.d.). Intergovernmental Agreement on Federal Financial Relations. Department of Prime Minister & Cabinet, Canberra. At http://www.federalfinancialrelations.gov.au/content/intergovernmental_agreements.aspx

Council on Federal Financial Relations (2011). Conceptual Framework for Performance Reporting. Department of Prime Minister & Cabinet, Canberra. At http://www.federalfinancialrelations.gov.au/content/performance_reporting/conceptual_framework_performance_reporting_feb_11.pdf

CPSA (1902). The Permanent Head. Commonwealth Public Service Act, 1902, Section 12(2). Federal Register of Legislation. At www.legislation.gov.au/Details/C1902A00005

Craft, J., & Halligan, J. (2017). Assessing 30 years of Westminster Policy Advisory System Experience. Policy Sciences, 50, 47-62.

Crawford, L., Costello, K., Pollack, J., & Bentley, L. (2003). Managing Soft Change Projects in the Public Sector. International Journal of Project Management, 21(6), 443-448.

Cuban, L. (1990). Reforming Again, Again, and Again. Educational Researcher, 19(1), 3-13.

Davis, N. (2017). The Annual Reporting Practices of an Australian Commonwealth Government Department: An Instance of Deinstitutionalisation. Accounting History, 22(4), 425-449.

Davis, N., & Bisman, J. E. (2015). Annual Reporting by an Australian Government Department: A Critical Longitudinal Study of Accounting and Organisational Change. Critical Perspectives on Accounting, 27, 129-143.

Davis, G., & Wood, T. (1998). Is There a Future for Contracting in the Australian Public Sector? Australian Journal of Public Administration, 57(4), 85-97.

Dawkins, J. (1983). Reforming the Australian Public Service. Australian Government Publishing Service, Canberra.

Dawkins, J. (1985). Reforms in the Canberra System of Public Administration. Australian Journal of Public Administration, 44(1), 59-72.

DeGroff, A., & Cargo, M. (2009). Policy Implementation: Implications for Evaluation. New Directions for Evaluation, 124, 47-60.

deLeon, P. (1988). The Contextual Burdens of Policy Design. Policy Studies Journal, 17(2), 297-309.

deLeon, P., & deLeon, L. (2002). What Ever Happened to Policy Implementation? An Alternative Approach. Journal of Public Administration Research and Theory, 12(4), 467-492.

Deutsch, C. P. (1981). The Behavioral Scientist: Insider and Outsider. Journal of Social Issues, 37(2), 172-191.

Di Francesco, M. (1999). Measuring Performance in Policy Advice Output: Australian Developments. International Journal of Public Sector Management, 12(5), 420-431.

Di Francesco, M. (2000). An Evaluation Crucible: Evaluating Policy Advice in Australian Central Agencies. Australian Journal of Public Administration, 59(1), 36-48.

Di Francesco, M. (2001). Process Not Outcomes in New Public Management? ‘Policy Coherence’ in Australian Government. The Drawing Board: An Australian Review of Public Affairs, 1(3), 103-116.

Dixon, J. (1996). Reinventing Civil Servants: Public Management Development and Education to Meet the Managerialist Challenge in Australia. Journal of Management Development, 15(7), 62-82.

Dixon, J., Kouzmin, A., & Korac-Kakabadse, N. (1998). Managerialism - Something Old, Something Borrowed, Little New: Economic Prescription versus Effective Organizational Change in Public Agencies. International Journal of Public Sector Management, 11(2/3), 164-187.

Downing, L., & Rogan, S. (2016). Evaluation as an Integrated Management Tool: Embedding an Evaluator into a Program. Evaluation Journal of Australasia, 16(2), 4-14.

DPM&C/ANAO. (2006). Implementation of Programme and Policy Initiatives. Making Implementation Matter. Better Practice Guide. Department of Prime Minister & Cabinet/ Australian National Audit Office. Canberra.

DSS (2019). Departmental Organisation Chart at 1 April. Department of Social Services. Canberra. At https://www.dss.gov.au/about-the-department/overview/organisation-charts

Durand, R., Decker, P. J., & Kirkman, D. M. (2014). Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure. American Journal of Evaluation, 35(3), 404-418.

Eisenhardt, K. M. (1989). Building Theories from Case Study Research. Academy of Management Review, 14(4), 532-550.

Eisenhardt, K. M., & Graebner, M. E. (2007). Theory Building from Cases: Opportunities and Challenges. Academy of Management Journal, 50(1), 25-32.

Elmore, R. F. (1979). Backward Mapping: Implementation Research and Policy Decisions. Political Science Quarterly, 94(4), 601-616.

Etikan, I., & Bala, K. (2017). Sampling and Sampling Methods. Biometrics & Biostatistics International Journal, 5(6), 215-217.

Fernandez, S., & Rainey, H.G. (2006). Managing Successful Organizational Change in the Public Sector, Public Administration Review, 66(2), 168-176.

Finance (2000). The Outcomes & Outputs Framework Guidance Document, November. Department of Finance and Administration, Canberra. At http://www.finance.gov.tt/wp-content/uploads/2013/11/pubA7AACE.pdf

Finance (2008). What is Operation Sunlight? Department of Finance and Deregulation, Canberra. http://www.finance.gov.au/archive/financial-framework/financial-management-policy-guidance/operation-sunlight/

Finance (2013). Measurement and the Australian Public Service. In Performance Reference Model. Department of Finance, Canberra. https://www.finance.gov.au/sites/default/files/aga-ref-models.pdf

Finance (2014). Enhanced Commonwealth Performance Framework. Discussion Paper. Department of Finance, Canberra. August. https://www.finance.gov.au/sites/default/files/enhanced-commonwealth-performance-framework-discussion-paper.pdf

Finance (2016). Corporate Plans 2015-16: Lessons Learned (p.2). Public Management Reform Agenda. Department of Finance, Canberra. At https://cfar.govspace.gov.au/files/2012/11/2015-16-Corporate-Plan-Lessons-Learned-Final.pdf

Finance (2017a). Overview of the Enhanced Commonwealth Performance Framework. Resource Management Guide 130. Department of Finance, Canberra. http://www.finance.gov.au/sites/default/files/rmg-130-overview-of-the-enhanced-commonwealth-performance-framework_0.pdf

Finance (2017b). Corporate Plan for Commonwealth Entities. PGPA Rule Section 16E. Department of Finance, Canberra. https://www.finance.gov.au/resource-management/pgpa-rule/

Finance (2017c). Resource Management Guidance - PGPA Act. Managing Performance. Department of Finance, Canberra. At http://www.finance.gov.au/resource-management/performance/

Finance (2017d). Public Management Reform Agenda. Department of Finance, Canberra. http://www.finance.gov.au/resource-management/pmra/

Finance (2017e). Public Governance, Performance and Accountability Act 2013 and Rule – Independent Review. Department of Finance, Canberra. http://www.finance.gov.au/pgpa-independent-review/#intro

Finance (2017f). Developing Good Performance Information. Resource Management Guide 131. Department of Finance, Canberra. http://www.finance.gov.au/sites/default/files/RMG%20131%20Developing%20good%20performance%20information.pdf

Finance (2018a). Flipchart of PGPA Act Commonwealth Entities and Companies. Department of Finance, 28 August. Canberra. https://www.finance.gov.au/sites/default/files/Flipchart_28%20August.pdf (accessed 4/9/18)

Finance (2018b). Public Management Reform Agenda. Communications. PMRA Community of Practice. Department of Finance, Canberra. https://www.finance.gov.au/resource-management/pmra/communications/#cop

Flyvbjerg, B. (2006). Five Misunderstandings about Case-Study Research. Qualitative Inquiry, 12(2), 219-245.

Fryer, K. J., & Ogden, S. M. (2014). Modelling Continuous Improvement Maturity in the Public Sector: Key Stages and Indicators. Total Quality Management & Business Excellence, 25(9-10), 1039-1053.

Funnell, S. C. (2000). Developing and Using a Program Theory Matrix for Program Evaluation and Performance Monitoring. New Directions for Evaluation, (87), 91-101.

Funnell, S.C., & Rogers, P. (2011). Purposeful Program Theory. Effective Use of Theories of Change and Logic Models. Jossey Bass, San Francisco.

Fusch, P. I., & Ness, L. R. (2015). Are We There Yet? Data Saturation in Qualitative Research. The Qualitative Report, 20(9), 1408-1416.

Gair, S. (2012). Feeling Their Stories Contemplating Empathy, Insider/Outsider Positionings, and Enriching Qualitative Research. Qualitative Health Research, 22(1), 134-143.

Gerrish, E. (2016). The Impact of Performance Management on Performance in Public Organizations: A Meta‐Analysis. Public Administration Review, 76(1), 48-66.

Ghobadian, A., Viney, H., & Redwood, J. (2009). Explaining the Unintended Consequences of Public Sector Reform. Management Decision, 47(10), 1514-1535.

Gill, R. (2002). Change Management – or Change Leadership? Journal of Change Management, 3(4), 307-318.

Gold, J. (2017). Tracking Delivery. Global Trends and Warning Signs in Delivery Units. Institute for Government, London. https://www.instituteforgovernment.org.uk/publications/tracking-delivery

Goldfinch, S. (1999). Remaking Australia’s Economic Policy: Economic Policy Decision‐Makers During the Hawke and Keating Labor Governments. Australian Journal of Public Administration, 58(2), 3-20.

Goodman, L. A. (1961). Snowball Sampling. The Annals of Mathematical Statistics, 32(1), 148-170.

Goodman, L. A. (2011). Comment: On Respondent-Driven Sampling and Snowball Sampling in Hard-to-Reach Populations and Snowball Sampling not in Hard-to-Reach Populations. Sociological Methodology, 41(1), 347-353.

Gray, M., & Bray, J. R. (2019). Evaluation in the Australian Public Service: Current State of Play, Some Issues and Future Directions. An ANZSOG Research Paper for the Australian Public Service Review Panel. Australia & New Zealand School of Government, Canberra. At https://www.apsreview.gov.au/sites/default/files/resources/appendix-b-evaluation-aps.pdf

Gruen, N. (2016). Why Australia Needs an Evaluator-General. The Mandarin, on-line 9 May. At https://www.themandarin.com.au/64566-nicholas-gruen-evaluator-general-part-two/

Grube, D. (2011). What the Secretary Said Next: ‘Public Rhetorical Leadership’ in the Australian Public Service. Australian Journal of Public Administration, 70(2), 115-130.

Grube, D. C., & Howard, C. (2016). Promiscuously Partisan? Public Service Impartiality and Responsiveness in Westminster Systems. Governance, 29(4), 517-533.

Guba, E. G. (1981). Criteria for Assessing the Trustworthiness of Naturalistic Inquiries. Educational Communication & Technology, 29 (2), 75-91.

Guest, G., Bunce, A., & Johnson, L. (2006). How Many Interviews are Enough? An Experiment with Data Saturation and Variability. Field Methods,18(1), 59-82.

Guthrie, J., & English, L. (1997). Performance Information and Programme Evaluation in the Australian Public Sector, International Journal of Public Sector Management, 10(3), 154-164.

Guthrie, J. E., & Parker, L. D. (1999). A Quarter of a Century of Performance Auditing in the Australian Federal Public Sector: A Malleable Masque. Abacus, 35(3), 302-332.

Guthrie, J., Parker, L., & English, L. M. (2003). A Review of New Public Financial Management Change in Australia. Australian Accounting Review, 13(30), 3-9.

Hall, W. (1995). The National Training Reform Agenda. Australian Economic Review, 28(2), 87-92.

Halligan, J. (1995). Policy Advice and the Public Service. In Peters, B. G., & Savoie, D. J. (Eds), Governance in a Changing Environment. Canadian Centre for Management Development, McGill-Queen’s University Press, 138-172.

Halligan, J. (2001). Contribution of the Australian Public Service to Public Administration and Management. Canberra Bulletin of Public Administration, 101, 20-25.

Halligan, J. (2003). Public Sector Reform and Evaluation in Australia and New Zealand. In Wollmann, H. (Ed.), Evaluation in Public-Sector Reform: Concepts and Practice in International Perspective. Cheltenham, 80-103.

Halligan, J. (2005). Public Management and Departments: Contemporary Themes – Future Agendas. Australian Journal of Public Administration, 64(1), 1-15.

Halligan, J. (2007). Reintegrating Government in Third Generation Reforms of Australia and New Zealand. Public Policy and Administration, 22(2), 217-238.

Halligan, J. (2010). The Australian Public Service: New Agendas and Reform. Ch. 3 in Aulich, C., & Evans, M. (Eds), The Rudd Government: Australian Commonwealth Administration 2007-2010. Australia and New Zealand School of Government. ANU Press, Canberra. At https://press.anu.edu.au/publications/series/anzsog/rudd-government

Halligan, J. (2013). The Evolution of Public Service Bargains of Australian Senior Public Servants. International Review of Administrative Sciences, 79(1), 111-129.

Halligan, J. (2018). Leadership and Public Sector Reform in Australia. In Berman, E., & Prasojo, E. (Eds), Leadership and Public Sector Reform in Asia. Public Policy and Governance, 30, Chapter 10, 231-255. Emerald Publishing Limited, Bingley. At https://www.emeraldinsight.com/doi/pdfplus/10.1108/S2053-769720180000030010

Halligan, J., & Adams, J. (2004). Security, Capacity and Post‐Market Reforms: Public Management Change in 2003. Australian Journal of Public Administration, 63(1), 85-93.

Halligan, J., & Wills, J. (2008). The Centrelink Experiment: Innovation in Service Delivery. Australia New Zealand School of Government. ANU E Press. Canberra.

Hamburger, P. (2007). Coordination and Leadership at the Centre of the Australian Public Service. In Koch, R., & Dixon, J. (Eds), Public Governance and Leadership. 207-231. Deutscher Universitäts-Verlag, Wiesbaden.

Hamburger, P., Stevens, B., & Weller, P. (2011). A Capacity for Central Coordination: The Case of the Department of the Prime Minister and Cabinet. Australian Journal of Public Administration, 70(4), 377-390.

Hanwright, J., & Makinson, S. (2008). Promoting Evaluation Culture: The Development and Implementation of an Evaluation Strategy in the Queensland Department of Education, Training and the Arts. Evaluation Journal of Australasia, 8(1), 20.

Harwood, J., & Phillimore, J. (2012). The Effects of COAG's National Reform Agenda on Central Agencies. The John Curtin Institute of Public Policy/Australia and New Zealand School of Government. Melbourne. https://www.anzsog.edu.au/preview-documents/research-output/5007-the-effects-of-coag-s-national-reform-agenda-on-central-agencies/

Hatry, H. P. (2013). Sorting the Relationships Among Performance Measurement, Program Evaluation, and Performance Management. New Directions for Evaluation, 137, 19-32.

Hawke, R. J. L. (1983). Reforming the Australian Public Service. Prime Minister of Australia. Australian Government Publishing Service. Canberra.

Hawke, L. (2007). Performance Budgeting in Australia. OECD Journal on Budgeting, 7(3), 133-147. Organisation for Economic Co-operation and Development, Paris. At http://www.oecd-ilibrary.org/governance/performance-budgeting-in-australia_budget-v7-art17-en

Hawke, L. (2012). Australian Public Sector Performance Management: Success or Stagnation? International Journal of Productivity and Performance Management, 61(3), 310-328.

Hawke, L. (2016). Case Studies. Australia, Chapter 5. In Moynihan, D. & Beazley, I. (2016). Toward Next-Generation Performance Budgeting Lessons from the Experiences of Seven Reforming Countries. Directions in Development. World Bank, Washington. https://openknowledge.worldbank.org/bitstream/handle/10986/25297/9781464809545.pdf?sequence=2&isAllowed=y

Head, B. W. (2008a). Wicked Problems in Public Policy. Public Policy, 3(2), 101-118.

Head, B. W. (2008b). Three Lenses of Evidence‐Based Policy. Australian Journal of Public Administration, 67(1), 1-11.

Head, B. W. (2015). Relationships between Policy Academics and Public Servants: Learning at a Distance? Australian Journal of Public Administration, 74(1), 5-12.

Head, B. W. (2016). Toward More “Evidence‐Informed” Policy Making? Public Administration Review, 76(3), 472-484.

Head, B. W., & Alford, J. (2015). Wicked Problems: Implications for Public Policy and Management. Administration & Society, 47(6), 711-739.

Head, B., & O’Flynn, J. (2015). Australia: Building Policy Capacity for Managing Wicked Policy Problems. In Massey, A., & Johnston, K. (Eds), The International Handbook of Public Administration and Governance. Edward Elgar Publishing, 341-368.

Health (2018a). Management Structure Chart. Department of Health, Canberra. http://www.health.gov.au/internet/main/publishing.nsf/Content/24BEDAF18381C86ACA257BF0001E0193/$File/Departmental-Structure-Chart-19-November-2018.pdf (accessed 18/12/18)

Health (2018b). Smoking Prevalence Rates. National Health Survey Results: 1990, 1995, 2001, 2004-05, 2007-08, 2011-12, 2014-15. Department of Health, Canberra. At http://www.health.gov.au/internet/publications/publishing.nsf/Content/tobacco-control-toc~smoking-rates (accessed 19/11/18)

Hehir, G. (2016). A Reflection of How Far Performance Auditing Has Come from its Roots in the 1970s to Where We Are Today and Where We Are Heading. Speech at the IMPACT Conference, Brisbane. 15 March. Australian National Audit Office, Canberra. https://www.anao.gov.au/work/speech/reflection-how-far-performance-auditing-has-come-its-roots-1970s-where-we-are-today-and

Hellawell, D. (2006). Inside–out: Analysis of the Insider–Outsider Concept as a Heuristic Device to Develop Reflexivity in Students doing Qualitative Research. Teaching in Higher Education, 11(4), 483-494.

Henry, K. (2015). “No One to Learn From”: Ken Henry Slams APS Collective Amnesia. Australian Financial Review, 18 November. http://www.afr.com/news/economy/canberra-has-a-bad-case-of-amnesia-says-ken-henry-20151117-gl116h

Hill, M., & Hupe, P. (2003). The Multi-layer Problem in Implementation Research. Public Management Review, 5(4), 471-490.

Hockey, J. (1993). Research Methods - Researching Peers and Familiar Settings. Research Papers in Education, 8(2), 199-225.

Holmes, M., & Shand, D. (1995). Management Reform: Some Practitioner Perspectives on the Past Ten Years. Governance, 8(4), 551-578.

Hood, C. (1991). A Public Management for All Seasons? Public Administration, 69(1), 3-19.

Hood, C. (1995). The “New Public Management” in the 1980s: Variations on a Theme. Accounting, Organizations and Society, 20(2), 93-109.

Hood, C., & Dixon, R. (2015). What We Have To Show for 30 Years of New Public Management: Higher Costs, More Complaints. Governance, 28(3), 265-267.

Hood, C., & Dixon, R. (2016). Not What It Said on the Tin? Reflections on Three Decades of UK Public Management Reform. Financial Accountability & Management, 32(4), 409-428.

Hood, C., & Lodge, M. (2004). Competency, Bureaucracy and Public Management Reform: A Comparative Analysis. Governance, 17(3), 313-333.

Hood, C., & Lodge, M. (2007). Endpiece: Civil Service Reform Syndrome–Are We Heading for a Cure? Transformation, Spring, 58-59.

Horne, N. (2010). Australian Public Service Reform. Parliamentary Library Briefing Book. Australian Parliament, Canberra. At https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BriefingBook43p/apsreform (accessed 27/3/19)

Horst, P., Nay, J. N., Scanlon, J. W., & Wholey, J. S. (1974). Program Management and the Federal Evaluator. Public Administration Review, 34(4), 300-308.

Howlett, M. (2009). Policy Analytical Capacity and Evidence‐Based Policy‐Making: Lessons from Canada. Canadian Public Administration, 52(2), 153-175.

Howlett, M. (2018). Moving Policy Implementation Theory Forward: A Multiple Streams/Critical Juncture Approach. Public Policy and Administration, On-line 25 May. https://journals.sagepub.com/doi/pdf/10.1177/0952076718775791

Howlett, M., & Cashore, B. (2014). Conceptualizing Public Policy. In Engeli, I., & Allison, C. R. (Eds), Comparative Policy Studies. Research Methods Series. 17-33. Palgrave Macmillan, London.

Hughes, M. (2016). Leading Changes: Why Transformation Explanations Fail. Leadership, 12(4), 449-469.

Hughes, O. (1992). Public Management or Public Administration? Australian Journal of Public Administration, 51(3), 286-296.

Hughes, O. (2012). Public Sector Trends in Australia. In Emerging and Potential Trends in Public Management: An Age of Austerity. Critical Perspectives on International Public Sector Management, 1, 173-193.

Hughes, A., Gleeson, D., Legge, D., & Lin, V. (2015). Governance and Policy Capacity in Health Development and Implementation in Australia. Policy and Society, 34(3), 229-245.

Hunter, D. E., & Nielsen, S. B. (2013). Performance Management and Evaluation: Exploring Complementarities. New Directions for Evaluation, 137, 7-17.

Hupe, P. L. (2011). The Thesis of Incongruent Implementation: Revisiting Pressman and Wildavsky. Public Policy and Administration, 26(1), 63-80.

Hupe, P. (2014). What Happens on the Ground: Persistent Issues in Implementation Research. Public Policy and Administration, 29(2), 164-182.

Hupe, P. L., & Hill, M. J. (2016). ‘And the rest is implementation.’ Comparing Approaches to What Happens in Policy Processes beyond Great Expectations. Public Policy and Administration, 31(2), 103-121.

Hupe, P., Hill, M., & Nangia, M. (2014). Studying Implementation Beyond Deficit Analysis: The Top-down View Reconsidered. Public Policy and Administration, 29(2), 145-163.

Hupe, P., & Sætren, H. (2015). Comparative Implementation Research: Directions and Dualities. Journal of Comparative Policy Analysis: Research and Practice, 17(2), 93-102.

Hyndman, N., & Liguori, M. (2016). Public Sector Reforms: Changing Contours on an NPM Landscape. Financial Accountability & Management, 32(1), 5-32.

Ilott, O., Randall, J., Bleasdale, A., & Norris, E. (2016). Making Policy Stick. Tackling Long-Term Challenges in Government. Institute for Government, London. At https://www.instituteforgovernment.org.uk/publications/making-policy-stick

Industry (2015). Evaluation Strategy 2015-2019. Office of the Chief Economist. Department of Industry and Science, Canberra. At https://www.industry.gov.au/data-and-publications/department-of-industry-innovation-and-science-evaluation-strategy-2015-2019

Infrastructure (2016). Evaluation Strategy 2016-21. Department of Infrastructure and Regional Development. Canberra. At https://infrastructure.gov.au/department/publications/files/Infrastructure-evaluation-strategy.pdf

Infrastructure (2019). Departmental Organisation Chart (8 April). Department of Infrastructure and Regional Development. Canberra. At https://infrastructure.gov.au/department/about/files/org_chart.pdf

IPAA (2018). Australian Public Service Reform: Learning from the Past and Building for the Future. Submission to the Independent Review of the Australian Public Service. Institute of Public Administration Australia. Canberra. July. http://www.ipaa.org.au/ipaa-seeks-to-bridge-gap-for-the-independent-review-of-the-australian-public-service/ (accessed 15/11/18).

Isett, K. R., Head, B. W., & Van Landingham, G. (2016). Caveat Emptor: What Do We Know about Public Administration Evidence and How Do We Know It? Public Administration Review, 76(1), 20-23.

Ives, D. (1994). Next Steps in Public Management, Australian Journal of Public Administration, 53(3), 335-340.

Ives, D. (1995a). Human Resource Management in the Australian Public Service: Challenges and Opportunities. Public Administration and Development, 15(3), 319-334.

Ives, D. (1995b). The Selection and Development of Senior Administrators and Managers. Australian Journal of Public Administration, 54(4), 584-592.

Jansen, K. J. (2004). From Persistence to Pursuit: A Longitudinal Examination of Momentum during the Early Stages of Strategic Change. Organization Science, 15(3), 276-294.

Jansen, K. J., Shipp, A. J., & Michael, J. H. (2016). Champions, Converts, Doubters, and Defectors: The Impact of Shifting Perceptions on Momentum for Change. Personnel Psychology, 69(3), 673-707.

JAPSTC (1992). Integrated Core Competencies for the Australian Public Service. Joint APS Training Council, Public Service Commission, Canberra. (personal copy held by author)

JCPAA (2002). Review of the Accrual Budget Documentation. Report 388. Joint Committee of Public Accounts and Audit, Australian Parliament, Canberra.

JCPAA (2017). Commonwealth Performance Framework—Inquiry based on Auditor- General’s Reports 31 (2015-16), 6 (2016-17) and 58 (2016-17). Joint Committee of Public Accounts and Audit. Australian Parliament, Canberra. https://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Public_Accounts_and_Audit/CPF

Johnsen, Å. (2005). What Does 25 Years of Experience Tell Us about the State of Performance Measurement in Public Policy and Management? Public Money and Management, 25(1), 9-17.

Johnston, J. (1998). Strategy, Planning, Leadership, and the Financial Management Improvement Plan: the Australian Public Service 1983 to 1996. Public Productivity & Management Review, 21(4), 352-368.

Johnston, J. (2000). The New Public Management in Australia. Administrative Theory and Praxis, 22(2), 345-358.

Johnston, J. (2004). An Australian Perspective on Global Public Administration Theory: “Westington” Influences, Antipodean Responses and Pragmatism? Administrative Theory & Praxis, 26(2), 169-184.

Jones, L. R., & Kettl, D. F. (2003). Assessing Public Management Reform in an International Context. International Public Management Review, 4(1), 1-19.

Kamener, L. (2017). How Resilient is your Reform Programme to a Change in Leadership? Centre for Public Impact, London. 22 November. https://www.centreforpublicimpact.org/resilient-reform-programme-change-leadership/#

Karp, T., & Helgø, T. I. T. (2008). From Change Management to Change Leadership: Embracing Chaotic Change in Public Service Organizations. Journal of Change Management, 8(1), 85-96.

Keast, R., & Brown, K. (2006). Adjusting to New Ways of Working: Experiments with Service Delivery in the Public Sector. Australian Journal of Public Administration, 65(4), 41-53.

Keating, M. (1989). Quo Vadis? Challenges of Public Administration. Australian Journal of Public Administration, 48(2), 123-131.

Keating, M. (1990), Managing for Results in the Public Interest. Australian Journal of Public Administration, 49(4), 387-397.

Keating, M. (1995). The Evolving Role of Central Agencies: Change and Continuity. Australian Journal of Public Administration, 54(4), 579-583.

Keating, M. (1999). The Public Service: Independence, Responsibility and Responsiveness. Australian Journal of Public Administration, 58(1), 39-47.

Keating, M., & Holmes, M. (1990). Australia's Budgetary and Financial Management Reforms. Governance, 3(2), 168-185.

Kelman, S. (2006). Downsizing, Competition, and Organizational Change in Government: Is Necessity the Mother of Invention? Journal of Policy Analysis and Management, 25(4), 875-895.

Kemp, D. (1999). Second Reading Speech, Public Service Bill, 30 March. Minister for Education, Training and Youth Affairs; Minister Assisting the Prime Minister for the Public Service. House of Representatives, Parliament House, Canberra. At http://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;query=Id%3A%22chamber%2Fhansardr%2F1999-03-30%2F0035%22

Kendall, J. (1999). Axial Coding and the Grounded Theory Controversy. Western Journal of Nursing Research, 21(6), 743-757.

Kennedy, M. M. (1979). Generalizing from Single Case Studies. Evaluation Review, 3(4), 661-678.

Kernaghan, K. (2009). Speaking Truth to Academics: The Wisdom of the Practitioners. Canadian Public Administration, 52(4), 503-523.

Kettl, D. F. (1997). The Global Revolution in Public Management: Driving Themes, Missing Links. Journal of Policy Analysis and Management, 16(3), 446-462.

Knies, E., & Leisink, P. (2018). People Management in the Public Sector. In HRM in Mission Driven Organizations, 15-46. Palgrave Macmillan, Cham.

Kober, R., Lee, J., & Ng, J. (2010). Mind Your Accruals: Perceived Usefulness of Financial Information in the Australian Public Sector under Different Accounting Systems. Financial Accountability & Management, 26(3), 267-298.

Kotter, J. (1995). Leading Change: Why Transformation Efforts Fail. Harvard Business Review, March-April, 59-67.

Kravchuk, R. S., & Schack, R. W. (1996). Designing Effective Performance-Measurement Systems under the Government Performance and Results Act of 1993. Public Administration Review, 56(4), 348-358.

Kroll, A. (2015). Drivers of Performance Information Use: Systematic Literature Review and Directions for Future Research. Public Performance & Management Review, 38(3), 459-486.

Kroll, A., & Moynihan, D. P. (2017). The Design and Practice of Integrating Evidence: Connecting Performance Management with Program Evaluation. Public Administration Review, 78(2), 183-194.

Kuipers, B. S., Higgs, M., Kickert, W., Tummers, L., Grandia, J., & Van der Voet, J. (2014). The Management of Change in Public Organizations: A Literature Review. Public Administration, 92(1), 1-20.

Kunisch, S., Bartunek, J.M., Mueller, J., Huy, Q. N. (2017). Time in Strategic Change Research, Academy of Management Annals, 11 (2), 1005–1064.

Lancaster, G. A. (2015). Pilot and Feasibility Studies Come of Age! Pilot and Feasibility Studies, 1(1), 1-4.

Laughlin, R., & Broadbent, J. (1996). Redesigning Fourth Generation Evaluation. An Evaluation Model for the Public-Sector Reforms in the UK? Evaluation, 2(4), 431-451.

Lee, J. (2008). Preparing Performance Information in the Public Sector: An Australian Perspective. Financial Accountability & Management, 24(2), 117-149.

Lee, J., & Fisher, G. (2007). The Perceived Usefulness and Use of Performance Information in the Australian Public Sector. Accounting, Accountability & Performance, 13(1), 42.

Le Gallais, T. (2008). Wherever I Go There I Am: Reflections on Reflexivity and the Research Stance. Reflective Practice, 9(2), 145-155.

Leithwood, K. A., & Montgomery, D. J. (1980). Evaluating Program Implementation. Evaluation Review, 4(2), 193-214.

Lindquist, E. (2010). From Rhetoric to Blueprint: The Moran Review as a Concerted, Comprehensive and Emergent Strategy for Public Service Reform. Australian Journal of Public Administration, 69(2), 115-151.

Lindquist, E., & Wanna, J. (2011). Delivering Policy Reform: Making it Happen, Making it Stick. In Lindquist, E., Vincent, S., & Wanna, J. (Eds.), Delivering Policy Reform. Anchoring Significant Reforms in Turbulent Times, 1-12. Australia and New Zealand School of Government, ANU Press, Canberra.

Lindquist, E., & Wanna, J. (2015). Is Implementation Only About Policy Execution? Advice for Public Sector Leaders from the Literature. In Wanna, J., Lindquist, E., & Marshall, P. (Eds.), New Accountabilities, New Challenges. Australia and New Zealand School of Government, ANU Press, Canberra.

Lindquist, E., Vincent, S., & Wanna, J. (Eds.) (2011). Delivering Policy Reform. Anchoring Significant Reforms in Turbulent Times. Australia and New Zealand School of Government, ANU E-Press, Canberra. http://press.anu.edu.au/titles/australia-and-new-zealand-school-of-government-anzsog-2/delivering_citation/

Lipsky, M. (1980/2010). Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. 30th Anniversary Edition. Russell Sage Foundation.

Lloyd, J. (2017a). State of the Service Report, 2016-17. Commissioner’s Foreword. Australian Public Service Commission, Canberra. At http://www.apsc.gov.au/__data/assets/pdf_file/0004/101200/SoSR_web.pdf (accessed 8/2/18)

Lloyd, J. (2017b). Guiding Principles in Service of the Public. Australian Public Service Commissioner. Speech at Public Sector Forum, Sydney, June. Australian Public Service Commission, Canberra. At https://www.apsc.gov.au/guiding-principles-service-public

Lu, Y., & Henning, K. S. (2013). Are Statisticians Cold‐blooded Bosses? A New Perspective on the ‘Old’ Concept of Statistical Population. Teaching Statistics, 35(1), 66-71.

MAB (1992). Contracting for the Provision of Services in Commonwealth Agencies. Publication 8. Management Advisory Board, December. Australian Government Publishing Service, Canberra.

MAB (1993). Building a Better Public Service. Publication 12. Management Advisory Board. Australian Government Publishing Service, Canberra.

MAC (2004). Connecting Government. Whole of Government Responses to Australia’s Priority Challenges. Management Advisory Committee, Australian Government, Canberra. At http://www.apsc.gov.au/__data/assets/pdf_file/0006/7575/connectinggovernment.pdf

MAC (2005). Senior Executive Service of the Australian Public Service. One APS - One SES. Management Advisory Committee, Canberra. At https://www.apsc.gov.au/sites/default/files/oneaps.pdf

MAC (2010). Empowering Change: Fostering Innovation in the Australian Public Service. Management Advisory Committee, Number 9, Australian Government, Canberra. At https://www.apsc.gov.au/sites/default/files/empoweringchange.pdf

MacCarthaigh, M. (2017). Reforming the Irish Public Service: A Multiple Streams Perspective. Administration, 65(2), 145-164.

Mackay, K. (1992). The Use of Evaluation in the Budget Process. Australian Journal of Public Administration, 51(4), 436-439.

Mackay, K. (1994). The Australian Government’s Evaluation Strategy: A Perspective from the Center (sic). The Canadian Journal of Program Evaluation, 9(2), 15-30.

Mackay, K. (2003). Two Generations of Performance Evaluation and Management System in Australia. Canberra Bulletin of Public Administration, (110), 9-20. http://search.informit.com.au/documentSummary;dn=200401535;res=IELAPA

Mackay, K. (2011). The Performance Framework of the Australian Government, 1987 to 2011. OECD Journal on Budgeting, 2011(3), 1-49.

Maloney, J. (2017). Evaluation: What’s the Use? Evaluation Journal of Australasia, 17(4), 25-38.

Marsh, D., & McConnell, A. (2010). Towards a Framework for Establishing Policy Success. Public Administration, 88(2), 564-583.

Mascarenhas, R. C. (1990). Reform of the Public Service in Australia and New Zealand. Governance, 3(1), 75-95.

Mascarenhas, R. C. (1993). Building an Enterprise Culture in the Public Sector: Reform of the Public Sector in Australia, Britain, and New Zealand. Public Administration Review, 53(4), 319-328.

Matheson, C. (2016). Identifying and Explaining Organizational Cultures in the Public Sector: A Study of the Australian Public Service Using the Interaction Ritual Theory of Randall Collins. Administration & Society, First published 6 May. http://journals.sagepub.com/doi/abs/10.1177/0095399716647151# Accessed 13/9/18.

Matheson, C. (2017). Four Organisational Cultures in the Australian Public Service: Assessing the Validity and Plausibility of Mary Douglas’ Cultural Theory. Australian Journal of Public Administration. First published 19 December. Accessed 13/9/18.

Matland, R. E. (1995). Synthesizing the Implementation Literature: The Ambiguity-Conflict Model of Policy Implementation. Journal of Public Administration Research and Theory, 5(2), 145-174.

Matthews, J., Ryan, N., & Williams, T. (2011). Adaptive and Maladaptive Responses of Managers to Changing Environments: A Study of Australian Public Sector Senior Executives. Public Administration, 89(2), 345-360.

Mauri, A. G., & Muccio, S. (2012). The Public Management Reform: from Theory to Practice. The Role of Cultural Factors. International Journal of Advances in Management Science. 1(3), 47-56.

Mayne, J. (2001). Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly. The Canadian Journal of Program Evaluation, 16(1), 1.

McConnell, A. (2010). Policy Success, Policy Failure and Grey Areas In-between. Journal of Public Policy, 30(03), 345-362.

McConnell, A. (2015). What is Policy Failure? A Primer to Help Navigate the Maze. Public Policy and Administration, 30(3-4), 221-242. http://ppa.sagepub.com/content/early/2015/01/22/0952076714565416.full.pdf+html

McIntosh, M. J., & Morse, J. M. (2015). Situating and Constructing Diversity in Semi-Structured Interviews. Global Qualitative Nursing Research, (2), 2333393615597674.

McLaughlin, J., & Jordan, G. (1999). Logic Models: A Tool for Telling Your Program’s Performance Story. Evaluation and Program Planning, 22(1), 65-72.

McPhee, I. (2007). Foreword to Improving Implementation. Organisational Change and Project Management (J. Wanna, Ed.). Australia and New Zealand School of Government, ANU Press, Canberra.

McPhee, I. (2011). The Evolving Role of the Australian National Audit Office Since Federation. Senate Occasional Lecture Series. Australian National Audit Office, Canberra. At https://www.anao.gov.au/work/speech/evolving-role-and-mandate-anao-federation

McTaggart, D., & O'Flynn, J. (2015). Public Sector Reform. Australian Journal of Public Administration, 74(1), 13-22.

Metcalfe, A. (2007). Opening Address. Corporate Governance in the Public Sector, 2007. Department of Immigration and Citizenship. Canberra.

Mento, A., Jones, R., & Dirndorfer, W. (2002). A Change Management Process: Grounded in both Theory and Practice. Journal of Change Management, 3(1), 45-59.

Miles, M. B., & Huberman, A. M. (1984). Drawing Valid Meaning from Qualitative Data: Toward a Shared Craft. Educational Researcher, 13(5), 20-30.

Milazzo, C. (1992). Annual Reports: Impediments to their Effective Use as Instruments of Accountability. Australian Journal of Public Administration, 51(1), 35-42.

Mintrom, M., & Norman, P. (2009). Policy Entrepreneurship and Policy Change. Policy Studies Journal, 37(4), 649-667.

Mishler, E. G., (1990), Validation in Inquiry-Guided Research: the Role of Exemplars in Narrative Studies. Harvard Educational Review, 60(4), 415-442.

Montague, S. (2000). Focusing on Inputs, Outputs and Outcomes: are International Approaches to Performance Management Really so Different? The Canadian Journal of Program Evaluation, 15(1), 139-148

Montjoy, R. S., & O'Toole, L. J. (1979). Toward a Theory of Policy Implementation: An Organizational Perspective. Public Administration Review, 39(5), 465-476.

Moore-Wilton, M. (1999). New Performance Paradigms for the Public Service. Secretary of the Department of Prime Minister and Cabinet, Canberra. http://webarchive.nla.gov.au/gov/20000119102251/http://www.pmc.gov.au/briefing/doc/moore2.html

Moran, T. (2013). Reforming to Create Value: Our Next Five Strategic Directions. Australian Journal of Public Administration, 72(1), 1-6.

Morgan, D., & Zeffane, R. (2003). Employee Involvement, Organizational Change and Trust in Management. International Journal of Human Resource Management, 14(1), 55-75.

Morrison, A. (2014). Picking up the Pace in Public Services. Policy Quarterly, 10(2), 43-48.

Morse, J. M. (1995). The Significance of Saturation. Qualitative Health Research, 5(2), 147-149.

Morse, J. M. (2015). Critical Analysis of Strategies for Determining Rigor in Qualitative Inquiry. Qualitative Health Research, 25(9), 1212-1222.

Morton, D., & Cook, B. (2018). Evaluators and the Enhanced Commonwealth Performance Framework. Evaluation Journal of Australasia, 18(3), 141-164.

Moynihan, D. P. (2006). Managing for Results in State Government: Evaluating a Decade of Reform. Public Administration Review, 66(1), 77-89.

Moynihan, D. P., & Ingraham, P. W. (2004). Integrative Leadership in the Public Sector: A Model of Performance-Information Use. Administration & Society, 36(4), 427-453.

Moynihan, D. P., & Kroll, A. (2015). Performance Management Routines that Work? An Early Assessment of the GPRA Modernization Act. Public Administration Review, 76(2), 314-323.

Moynihan, D. P., & Pandey, S. K. (2010). The Big Question for Performance Management: Why Do Managers use Performance Information? Journal of Public Administration Research and Theory, 20(4), 849-866.

Moynihan, D. P., Pandey, S. K., & Wright, B. E. (2012). Prosocial Values and Performance Management Theory: Linking Perceived Social Impact and Performance Information Use. Governance, 25(3), 463-483.

Mrdak, M. (2015). Are We There Yet? (Developing a Departmental Evaluation Culture). Presentation to the Canberra Evaluation Forum, 19 March. Secretary, Department of Infrastructure and Regional Development, Canberra. https://vs286790.blob.core.windows.net/docs/CEF-ResourcesPAST2015/CEF-2015-03-19-Mrdak-Are-We-There-Yet-notes.pdf

Mulgan, R. G. (1995). Academics, Public Servants and Managerialism. Canberra Bulletin of Public Administration, (78), 6-12. Institute of Public Administration Australia, Canberra. At https://search.informit.com.au/fullText;dn=960706730;res=IELAPA (accessed 14/8/19)

Mulgan, R. (1998). Politicisation of Senior Appointments in the Australian Public Service. Australian Journal of Public Administration, 57(3), 3-14.

Mulgan, R. (2008). How Much Responsiveness is too Much or too Little? Australian Journal of Public Administration, 67(3), 345-356.

Mulgan, R. (2010). Where Have all the Ministers Gone? Australian Journal of Public Administration, 69(3), 289-300.

Müller, J., & Kunisch, S. (2018). Central Perspectives and Debates in Strategic Change Research. International Journal of Management Reviews, 20(2), 457-482.

NLA (2017). Former Management Advisory Board/Management Improvement Advisory Committee (MAB/MIAC) Publication Series, 1991-1996. Management Advisory Board, Australian Government Publishing Service. National Library of Australia. Canberra. https://catalogue.nla.gov.au/Record/2084231 (accessed 9 January 2017)

Newcomer, K., & Brass, C. T. (2016). Forging a Strategic and Comprehensive Approach to Evaluation within Public and Nonprofit Organizations: Integrating Measurement and Analytics within Evaluation. American Journal of Evaluation, 37(1), 80-99.

Newcomer, K., & Caudle, S. (2011). Public Performance Management Systems: Embedding Practices for Improved Success. Public Performance & Management Review, 35(1), 108-132.

Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the Twain Shall Meet? a Comparison of Implementation Science and Policy Implementation Research. Implementation Science, 8(1), 63.

Noor, K. B. M. (2008). Case Study: A Strategic Research Methodology. American Journal of Applied Sciences, 5(11), 1602-1604.

Noy, C. (2008). Sampling Knowledge: The Hermeneutics of Snowball Sampling in Qualitative Research. International Journal of Social Research Methodology, 11(4), 327-344.

Oakland, J. S., & Tanner, S. (2007). Successful Change Management. Total Quality Management & Business Excellence, 18(1-2), 1-19.

O’Faircheallaigh, C., & Ryan, B. (1992). Introduction. In O’Faircheallaigh, C., & Ryan, B. (Eds.), Program Evaluation and Performance Monitoring. Centre for Australian Public Sector Management, Macmillan, Melbourne.

O’Flynn, J. (2011). Someone Started a Rumour! What Do We Actually Know about the Capacity of the Australian Public Service? In Perspectives on the Capacity of the Australian Public Service and Effective Policy Development and Implementation. Australian Journal of Public Administration, 70(3), 309-317.

O’Flynn, J. (2015). Public Sector Reform: The Puzzle We Can Never Solve? in McTaggart, D., & O'Flynn, J. Public Sector Reform. Australian Journal of Public Administration, 74(1), 13-22.

Orazi, D. C., Turrini, A., & Valotti, G. (2013). Public Sector Leadership: New Perspectives for Research and Practice. International Review of Administrative Sciences, 79(3), 486-504.

Osborne, S. P. (2017). Public Management Research over the Decades: What are we Writing About? Public Management Review, 19(2), 109-113.

Osborne, S. P., & Brown, K. (2005). Managing Change and Innovation in Public Service Organizations. Routledge, Abingdon.

O'Toole, L. J. (2000). Research on Policy Implementation: Assessment and Prospects. Journal of Public Administration Research and Theory, 10(2): 263-288.

O'Toole, L. J. (2004). The Theory–Practice Issue in Policy Implementation Research. Public Administration, 82(2), 309-329.

O’Toole Jr, L. J., & Meier, K. J. (2014). Public Management, Context, and Performance: In Quest of a More General Theory. Journal of Public Administration Research and Theory, 25(1), 237-256.

Page, C., & Ayres, R. (2018). Policy Logic: Creating Policy and Evaluation Capital in your Organisation. Evaluation Journal of Australasia, 18(1), 45-63.

Pal, L. A., & Clark, I. D. (2015). Making Reform Stick: Political Acumen as an Element of Political Capacity for Policy Change and Innovation. Policy and Society, 34(3), 247-257.

Palys, T. (2008). Purposive Sampling. The Sage Encyclopedia of Qualitative Research Methods, 2(1), 697-8.

Panchamia, N., & Thomas, P. (2014). Capability Reviews. Institute for Government, London. https://www.instituteforgovernment.org.uk/sites/default/files/case%20study%20capabilities.pdf

Parker, R. S. (1976). The Coombs Inquiry and the Prospects for Action. Australian Journal of Public Administration, 35(4), 311-319.

Parker, L. D., & Guthrie, J. (1993). The Australian Public Sector in the 1990s: New Accountability Regimes in Motion. Journal of International Accounting, Auditing and Taxation, 2(1), 59-81.

Parkinson, M. (2016). Address to the Australasian Implementation Conference, Melbourne, 6 October. Secretary, Department of Prime Minister and Cabinet, Canberra. https://www.pmc.gov.au/news-centre/pmc/address-australasian-implementation-conference

Perrin, B. (2015). Bringing Accountability up to Date with the Realities of Public Sector Management in the 21st Century. Canadian Public Administration, 58(1), 183-203.

Peters, B. G., Rhodes, R. A. W., & Wright, V. (Eds.). (2000). Administering the Summit: Administration of the Core Executive in Developed Countries. Basingstoke: Macmillan.

Peters, B. G., & Pierre, J. (2017). Two Roads to Nowhere: Appraising 30 Years of Public Administration Research. Governance. 30(1), 11-16.

Pettigrew, A. M. (1990). Longitudinal Field Research on Change: Theory and Practice. Organization Science, 1(3), 267-292.

Pettigrew, A. M., Woodman, R. W., & Cameron, K. S. (2001). Studying Organizational Change and Development: Challenges for Future Research. Academy of Management Journal, 44(4), 697-713.

PGPA (2018). Public Governance, Performance and Accountability Act 2013. Federal Register of Legislation. At www.legislation.gov.au/Details/C2017C00269

Phillimore, J. (2013). Understanding Intergovernmental Relations: Key Features and Trends Australian Journal of Public Administration, 72(3), 228-238.

Pick, D., & Teo, S. T. (2017). Job Satisfaction of Public Sector Middle Managers in the Process of NPM Change. Public Management Review, 19(5), 705-724.

Ployhart, R. E., & Vandenberg, R. J. (2010). Longitudinal Research: The Theory, Design, and Analysis of Change. Journal of Management, 36(1), 94-120.

Podger, A. S. (2004). Innovation with Integrity—the Public Sector Leadership Imperative to 2020. Australian Journal of Public Administration, 63(1), 11-21.

Podger, A. (2005). Regeneration‐Where to in the Future? Australian Journal of Public Administration, 64(2), 13-19.

Podger, A. (2007). What Really Happens: Department Secretary Appointments, Contracts and Performance Pay in the Australian Public Service. Australian Journal of Public Administration, 66(2), 131-147.

Podger, A. (2009). The Role of Departmental Secretaries: Personal Reflections on the Breadth of Responsibilities Today. Australia New Zealand School of Government, Canberra. http://press.anu.edu.au/publications/series/australia-and-new-zealand-school-government-anzsog/role-departmental-secretaries

Podger, A. (2013). Mostly Welcome, but are the Politicians Fully Aware of What They have Done? The Public Service Amendment Act 2013. Australian Journal of Public Administration, 72(2), 77-81.

Podger, A. (2018). Making ‘Accountability for Results’ Really Work? In Podger, A., Su, T-T., Wanna, J., Chan, H. S., & Niu, M. (Eds.), Value for Money. Budget and Financial Management Reform in The People’s Republic of China, Taiwan and Australia, 95-126. Australia and New Zealand School of Government, ANU Press, Canberra.

Poland, O. F. (1974). Program Evaluation and Administrative Theory. Public Administration Review, 34(4), 333-338.

Pollack, J., & Pollack, R. (2015). Using Kotter’s Eight Stage Process to Manage an Organisational Change Program: Presentation and Practice. Systemic Practice and Action Research, 28(1), 51-66.

Pollitt, C. (1995). Justification by Works or by Faith? Evaluating the New Public Management. Evaluation, 1(2), 133-154.

Pollitt, C. (2000). Institutional Amnesia: A Paradox of the Information Age? Prometheus, 18(1), 5-16.

Pollitt, C. (2001). Clarifying Convergence. Striking Similarities and Durable Differences in Public Management Reform. Public Management Review, 3(4), 471-492.

Pollitt, C. (2006). Academic Advice to Practitioners—What is its Nature, Place and Value within Academia? Public Money and Management, 26(4), 257-264.

Pollitt, C. (2009a). Structural Change and Public Service Performance: International Lessons? Public Money & Management, 29(5), 285-291.

Pollitt, C. (2009b). Bureaucracies Remember, Post‐Bureaucratic Organizations Forget? Public Administration, 87(2), 198-218.

Pollitt, C. (2013). 40 Years of Public Management Reform in UK Central Government – Promises, Promises... Policy & Politics, 41(4), 465-480.

Pollitt, C. (2017). Public Administration Research since 1980: Slipping Away from the Real World? International Journal of Public Sector Management, 30(6-7), 555-565.

Pollitt, C. (2018). Performance Management 40 years on: a Review. Some Key Decisions and Consequences. Public Money & Management, 38(3), 167-174.

Pollitt, C., & Bouckaert, G. (2003). Evaluating Public Management Reforms: An International Perspective. Evaluation in Public-Sector Reform. Concepts and Practice in International Perspective. Cheltenham, 12-35.

Pollitt, C., & Bouckaert, G. (2011). Public Management Reform: A Comparative Analysis: New Public Management, Governance, and the Neo-Weberian State. Oxford University Press.

Pollitt, C., & Dan, S. (2013). Searching for Impacts in Performance-Oriented Management Reform. Public Performance & Management Review, 37(1), 7-32.

Prasser, S. (2004). Poor Decisions, Compliant Management and Reactive Change: The Public Sector in 2003. Australian Journal of Public Administration, 63(1), 94-103.

Prasser, S. (2006). Royal Commissions in Australia: When Should Governments Appoint Them? Australian Journal of Public Administration, 65(3), 28-47.

Pressman, J. L., & Wildavsky, A. B. (1973). Implementation (1st Edition). University of California Press, Berkeley.

PSAa. Public Service Act, 1999. s.64(2)(a). Federal Register of Legislation. At https://www.legislation.gov.au/Details/C2019C00057

PSAb. Public Service Act, 1999. s.57(1). Federal Register of Legislation. At https://www.legislation.gov.au/Details/C2019C00057

PSAc. Public Service Act, 1999. s.57(2). Federal Register of Legislation. At https://www.legislation.gov.au/Details/C2019C00057

PSAd. Public Service Act, 1999. s.3(a). Federal Register of Legislation. At https://www.legislation.gov.au/Details/C2019C00057

PSC, Qld (n.d.). Change Management Best Practices Guide. Five (5) Key Factors Common to Success in Managing Organisational Change. Public Service Commission Queensland. http://www.psc.qld.gov.au/publications/subject-specific-publications/assets/change-management-best-practice-guide.pdf

PSC, WA (2012). Structural Change Management. A Guide to Assist Agencies to Manage Change. Public Service Commission Western Australia. https://publicsector.wa.gov.au/public-administration/structural-change-management

Radin, B. A. (2000). The Government Performance and Results Act and the Tradition of Federal Management Reform: Square Pegs in Round Holes? Journal of Public Administration Research and Theory, 10(1), 111-135.

Radin, B. A. (2003). A Comparative Approach to Performance Management: Contrasting the Experience of Australia, New Zealand, and the United States. International Journal of Public Administration, 26(12), 1355-1376.

Radin, B. A. (2013). Reclaiming our Past: Linking Theory and Practice. PS: Political Science & Politics, 46(1), 1-7.

Radin, B. A. (2016). Policy Analysis and Advising Decisionmakers: Don’t Forget the Decisionmaker/Client. Journal of Comparative Policy Analysis: Research and Practice, 18(3), 290-301.

Rafferty, A. E., Jimmieson, N. L., & Armenakis, A. A. (2013). Change Readiness: A Multilevel Review. Journal of Management, 39(1), 110-135.

Rana, T., Hoque, Z., & Jacobs, K. (2019). Public Sector Reform Implications for Performance Measurement and Risk Management Practice: Insights from Australia. Public Money & Management, 39(1), 37-45.

RCAGA (1976). Royal Commission on Australian Government Administration. Australian Government Publishing Service, Canberra. At https://apo.org.au/sites/default/files/resource-files/1976/08/apo-nid34221-1236056.pdf

Reed, M. I. (2009). The Theory/Practice Gap: a Problem for Research in Business Schools? Journal of Management Development, 28(8), 685-693.

Rhodes, R. A. (2016). Recovering the Craft of Public Administration. Public Administration Review. 76(4), 638–647.

Rhodes, R. A. W., & Tiernan, A. (2015). Executive Governance and its Puzzles. In International Handbook of Public Administration and Governance, 81-103. Edward Elgar, Cheltenham.

Rhodes, R. A. W., Wanna, J., & Weller, P. (2008). Reinventing Westminster: How Public Executives Reframe their World. Policy & Politics, 36(4), 461-479.

Roberts, A. (2017). The Aims of Public Administration: Reviving the Classical View. Perspectives on Public Management and Governance, 1(1), 73-85.

Rogers, P. (2008). Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions. Evaluation, 14(1), 29-48.

Rouleau, L. (2005). Micro-Practices of Strategic Sensemaking and Sensegiving: How Middle Managers Interpret and Sell Change Every Day. Journal of Management Studies, 42(7), 1413-1441.

Rouleau, L., & Balogun, J. (2011). Middle Managers, Strategic Sensemaking, and Discursive Competence. Journal of Management Studies, 48(5), 953-983.

Rowley, J. (2002). Using Case Studies in Research. Management Research News, 25 (1), 16-27.

Rusaw, C. A. (2007). Changing Public Organizations: Four Approaches. International Journal of Public Administration, 30(3), 347-361.

Ryan, B. (2004). Measuring and Managing for Performance: Lessons from Australia. In Strategies for Public Management Reform. Research in Public Policy Analysis and Management, 13, 415-449.

Ryan, N. (1995). Unravelling Conceptual Developments in Implementation Analysis. Australian Journal of Public Administration, 54(1), 65-80.

Ryan, N., Williams, T., Charles, M., & Waterhouse, J. (2008). Top-down Organizational Change in an Australian Government Agency. International Journal of Public Sector Management, 21(1), 26-44.

Sabatier, P. A. (1986). Top-down and Bottom-up Approaches to Implementation Research: a Critical Analysis and Suggested Synthesis. Journal of Public Policy, 6(1), 21-48.

Sabatier, P., & Mazmanian, D. (1979). The Conditions of Effective Implementation: A Guide to Accomplishing Policy Objectives. Policy Analysis, 5(4), 481-504.

Sabatier, P., & Mazmanian, D. (1980). The Implementation of Public Policy: A Framework of Analysis. Policy Studies Journal, 8(4), 538-560.

Saetren, H. (2005). Facts and Myths about Research on Public Policy Implementation: Out-of-Fashion, Allegedly Dead, but Still Very Much Alive and Relevant. Policy Studies Journal, 33(4), 559-582.

Saetren, H. (2014). Implementing the Third Generation Research Paradigm in Policy Implementation Research: An Empirical Assessment. Public Policy and Administration, 29(2), 84-105.

Sætren, H., & Hupe, P. L. (2018). Policy Implementation in an Age of Governance. In The Palgrave Handbook of Public Administration and Management in Europe, 553-575. Palgrave Macmillan, London.

Saldana, J. (2009). The Coding Manual for Qualitative Researchers. Sage Publications, Thousand Oaks.

Sampson, H. (2004). Navigating the Waves: the Usefulness of a Pilot in Qualitative Research. Qualitative Research, 4 (3), 383-402.

Sanderson, I. (2002). Evaluation, Policy Learning and Evidence‐Based Policy Making. Public Administration, 80(1), 1-22.

Sasse, T. & Norris, E. (2019). Moving On. The Costs of High Staff Turnover in the Civil Service. Institute for Government, London. https://www.instituteforgovernment.org.uk/publications/moving-on-staff-turnover-civil-service

SCFPA (1990). Not Dollars Alone. Review of the Financial Management Improvement Program. Standing Committee on Finance and Public Administration, House of Representatives, Australian Parliament House, Canberra.

SCFPA (2007). Transparency and Accountability of Commonwealth Public Funding and Expenditure. Standing Committee on Finance and Public Administration. The Senate, Australian Parliament, Canberra.

Scheirer, M. A. (1987). Program Theory and Implementation Theory: Implications for Evaluators. New Directions for Program Evaluation, 1987(33), 59-76.

Scheirer, M. A., (2012). Planning Evaluation through the Program Life Cycle. American Journal of Evaluation, 33(2), 263-294.

Sedgwick, S. (1994). Evaluation of Management Reforms in the Australian Public Service. Secretary of Department of Finance. Australian Journal of Public Administration, 53(3), 341-347.

Sedgwick, S. (2011a). Evaluation and Australian Public Service Reform. Presentation to the Canberra Evaluation Forum, 17 February. Australian Public Service Commission, Canberra. https://vs286790.blob.core.windows.net/docs/CEF-ResourcesPAST2015/CEF-2011-02-17-Australian%20Public%20Service%20Commissioner%20-%20Speech.pdf

Sedgwick, S. (2011b). The Agenda for Achieving a World-Class Public Sector: Making Reforms that Matter in the Face of Challenges. In Lindquist, E., Vincent, S., & Wanna, J. (Eds.), Delivering Policy Reform. Anchoring Significant Reforms in Turbulent Times, 75-89. Australia and New Zealand School of Government, ANU Press, Canberra.

Shannon, E. A. (2017). Beyond Public Sector Reform–The Persistence of Change. Australian Journal of Public Administration, 76(4), 470-479.

Shergold, P. (2015). Learning from Failure: Why Large Government Policy Initiatives Have Gone So Badly Wrong in the Past and How the Chances of Success in the Future Can Be Improved. Sourced from Australian Public Service Commission, Canberra (7/2/18). http://www.apsc.gov.au/publications-and-media/current-publications/learning-from-failure

Skinner, D. (2004). Evaluation and Change Management: Rhetoric and Reality. Human Resource Management Journal, 14(3), 5-19.

Sminia, H. (2016). Pioneering Process Research: Andrew Pettigrew's Contribution to Management Scholarship, 1962–2014. International Journal of Management Reviews, 18(2), 111-132.

Smith, T. B. (1973). The Policy Implementation Process. Policy Sciences, 4(2), 197-209.

Smith, P. (1995). On the Unintended Consequences of Publishing Performance Data in the Public Sector. International Journal of Public Administration, 18(2-3), 277-310.

Southern, W. (2014). Building Evaluation Capacity in the Department of Immigration and Border Protection. Presentation by the Deputy Secretary to the Canberra Evaluation Forum, 16 October. Canberra. https://vs286790.blob.core.windows.net/docs/CEF-ResourcesPAST2015/CEF-2014-10-16-DIBP_Evaluation_capacity_building.pdf

Stack, R., Leal, N., Stamp, S., Reveruzzi, B., Middlin, K., Lennon, A. (2018). Complex Evaluations in the Political Context: Engaging Stakeholders in Evaluation Design. Evaluation Journal of Australasia, 18(2), 122-131.

Stake, R. E. (1978). The Case Study Method in Social Inquiry. Educational Researcher, 7(2), 5-8.

Stake, R. E. (1994). Case Studies. Ch. 4 in Denzin, N. K., & Lincoln, Y. S. (Eds.), Handbook of Qualitative Research, 236-247. Sage Publications, Inc.

Stark, A., & Head, B. (2018). Institutional Amnesia and Public Policy. Journal of European Public Policy, 1-19. Online 22 October at https://doi.org/10.1080/13501763.2018.1535612

Stenbacka, C. (2001). Qualitative Research Requires Quality Concepts of its Own. Management Decision, 39(7), 551-556.

Stewart, J., & Kimber, M. (1996). The Transformation of Bureaucracy? Structural Change in the Commonwealth Public Service 1983-93. Australian Journal of Public Administration, 55(3), 37-48.

Stewart, J., & Kringas, P. (2003). Change Management—Strategy and Values in Six Agencies from the Australian Public Service. Public Administration Review, 63(6), 675-688.

Strauss, A. & Corbin, J., (1990). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. 2nd Edition. Sage Publications. Thousand Oaks, California.

Talbot, C., & Wiggan, J. (2010). The Public Value of the National Audit Office. International Journal of Public Sector Management, 23(1), 54-70.

Tanner, L. (2008). Operation Sunlight. Enhancing Budget Transparency. Minister for Finance and Deregulation. Department of Finance, Canberra. At https://www.finance.gov.au/sites/default/files/operation-sunlight-enhancing-budget-transparency.pdf

Taylor, J. (1992). Public Accountability Requirements. Australian Journal of Public Administration, 51(4), 455-460.

Taylor, J. (2006). Performance Measurement in Australian and Hong Kong Government Departments. Public Performance & Management Review, 29(3), 334-357.

Taylor, J. (2009). Strengthening the Link between Performance Measurement and Decision Making. Public Administration, 87(4), 853-871.

Tellis, W. M. (1997). Application of a Case Study Methodology. The Qualitative Report, 3(3), 1-19.

TFMI (1992). The Australian Public Service Reformed – An Evaluation of a Decade of Management Reform. Task Force on Management Improvement, AGPS. Canberra.

’t Hart, P. (2010). Lifting its Game to Get Ahead: The Canberra Bureaucracy’s Reform by Stealth. Australian Review of Public Affairs, July. http://www.australianreview.net/digest/2010/07/thart.html

’t Hart, P. (2011). Epilogue: Rules for Reformers. In Lindquist, E., Vincent, S., & Wanna, J. (Eds), Delivering Policy Reform: Anchoring Significant Reforms in Turbulent Times. 1-12. Australia and New Zealand School of Government, ANU Press, Canberra.

Thodey, D. (2019). Priorities for Change. Chair, Independent Review of the APS. Department of Prime Minister and Cabinet, Canberra. March. At https://www.apsreview.gov.au/sites/default/files/resources/aps-review-priorities-change.pdf

Thomas, P. G. (2006). Performance Measurement, Reporting, Obstacles and Accountability: Recent Trends and Future Directions. ANU E Press, Canberra. http://press.anu.edu.au/titles/australia-and-new-zealand-school-of-government-anzsog-2/performance-measurement-reporting-obstacles-and-accountability/

Thomas, P. G. (2009). Parliamentary Scrutiny of Government Performance in Australia. Australian Journal of Public Administration, 68(4), 373-398.

Tiernan, A. (2006). Advising Howard: Interpreting Changes in Advisory and Support Structures for the Prime Minister of Australia. Australian Journal of Political Science, 41(3), 309-324.

Tiernan, A. (2007). Building Capacity for Policy Implementation. in Improving Implementation. Organisational Change and Project Management (Wanna, J., Ed). Australia and New Zealand School of Government, ANU Press, Canberra.


Tiernan, A. (2011). Advising Australian Federal Governments: Assessing the Evolving Capacity and Role of the Australian Public Service. Australian Journal of Public Administration, 70(4), 335-346.

Tiernan, A. (2015a). The Dilemmas of Organisational Capacity. Policy and Society, 34(3), 209-217.

Tiernan, A. (2015b). Craft and Capacity in the Public Service. Australian Journal of Public Administration, 74(1), 53-62.

Tiernan, A. (2015c). Reforming Australia's Federal Framework: Priorities and Prospects. Australian Journal of Public Administration, 74(4), 398-405.

Tiernan, A. (2016). Political Amnesia – Correspondence. Quarterly Essay, Issue 61. 85-88. Black Inc, Melbourne.

Tingle, L. (2015). Political Amnesia. How We Forgot to Govern. Quarterly Essay, Melbourne, 60, 1-84.

Tune, D. (2010). Evaluation: Renewed Strategic Emphasis. Presentation to the Canberra Evaluation Forum by David Tune, Secretary, Department of Finance and Deregulation, Canberra. August. http://www.finance.gov.au/sites/default/files/speaking-notes-for-David-Tune-presentation-18-08-2010.pdf

Turnbull, M. (2018). Review of the Australian Public Service. (then) Prime Minister of Australia. Canberra, 4 May. https://pmtranscripts.pmc.gov.au/release/transcript-41613

van der Meer, F-B., Reichard, C., & Ringeling, A. (2016). Becoming a Student of Reform. In Van de Walle, S., & Groeneveld, S. (Eds), Theory and Practice of Public Sector Reform. Taylor & Francis Group, Florence. 265-283.

van der Voet, J. (2014). The Effectiveness and Specificity of Change Management in a Public Organization: Transformational Leadership and a Bureaucratic Organizational Structure. European Management Journal, 32(3), 373-382.

van der Voet, J. (2016). Change Leadership and Public Sector Organizational Change: Examining the Interactions of Transformational Leadership Style and Red Tape. The American Review of Public Administration, 46(6), 660-682.

van der Voet, J., Kuipers, B. S., & Groeneveld, S. (2016a). Implementing Change in Public Organizations: The Relationship between Leadership and Affective Commitment to Change in a Public Sector Context. Public Management Review, 18(6), 842-865.

van der Voet, J., Kuipers, B., & Groeneveld, S. (2016b). A Change Management Perspective. In Van de Walle, S., & Groeneveld, S. (Eds), Theory and Practice of Public Sector Reform. Routledge, New York. 79-99.

Van Meter, D., & Van Horn, C. (1975). The Policy Implementation Process: A Conceptual Framework. Administration & Society, 6(4), 445-488.


Verspaandonk, R., Holland, I., & Horne, N. (2010). Chronology of Changes in the Australian Public Service 1975-2010. Parliamentary Library, Canberra. At http://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BN/1011/APSChanges

Wanna, J. (2006). From Afterthought to Afterburner: Australia’s Cabinet Implementation Unit. Journal of Comparative Policy Analysis, 8(4), 347-369.

Wanna, J. (2010). Issues and Agendas for the Term. In The Rudd Government: Australian Commonwealth Administration 2007–2010. Australia and New Zealand School of Government, Canberra, 17-31. http://press.anu.edu.au/node/399/download

Wanna, J. (2015). Through a Glass Darkly: the Vicissitudes of Budgetary Reform. In Wanna, J., Lindquist, E., & Marshall, P. (Eds), New Accountabilities, New Challenges. Australia and New Zealand School of Government, ANU Press, Canberra, 159-18.

Wanna, J. (2018). Government Budgeting and the Quest for Value-for-Money Outcomes in Australia. In Podger, A., Su, T-T., Wanna, J., Chan, H.S., & Nui, M. (Eds), Value for Money: Budget and Financial Management Reform in The People’s Republic of China, Taiwan and Australia. 17-41. Australia and New Zealand School of Government, ANU Press, Canberra.

Wanna, J. (Ed) (2007). Improving Implementation: Organisational Change and Project Management. Australia and New Zealand School of Government, ANU Press, Canberra.

Wanna, J., Kelly, J., & Forster, J. (2000). Managing Public Expenditure in Australia. Allen & Unwin, St Leonards.

Wanna, J., & Weller, P. (2003). Traditions of Australian Governance. Public Administration, 81(1), 63-94.

Waterford, J. (1991). A Bottom Line on Public Service Accountability. Australian Journal of Public Administration, 50(3), 414-417.

Weiss, C. H. (1999). The Interface between Evaluation and Public Policy. Evaluation, 5(4), 468-486.

Weller, P. (1983). Do Prime Ministers’ Departments Really Create Problems? Public Administration, 61(1), 59-78.

Weller, P. (1993). Reforming the Public Service: What Has been Achieved and How Can It Be Evaluated? In Weller, P., Forster, J., & Davis, G. (Eds), Reforming the Public Service: Lessons from Recent Experience. Centre for Australian Public Sector Management, Brisbane. 221-236.

Weller, P. (2001). Australia's Mandarins: The Frank and the Fearless? Allen & Unwin, Crows Nest.

Wettenhall, R. (2011). Organisational Amnesia: a Serious Public Sector Reform Issue. International Journal of Public Sector Management, 24(1), 80-96.


Wettenhall, R. (2013). A Critique of the “Administrative Reform Industry”: Reform is Important, but so is Stability. Teaching Public Administration, 32(2), 149-164.

Wholey, J. S. (1987). Evaluability Assessment: Developing Program Theory. New Directions for Program Evaluation, 1987(33), 77-92.

Wholey, Joseph S. (2001). Managing for Results: Roles for Evaluators in a New Management Era. The American Journal of Evaluation 22 (3), 343-347.

Wilenski, P. (1979). Political Problems of Administrative Responsibility and Reform. Australian Journal of Public Administration, 38(4), 347-360.

Wilenski, P. (1986). Administrative Reform – General Principles and the Australian Experience. Public Administration, 64(3), 257-276.

Wilkins, P., Coulson, K., & Phillimore, J. (2018). Central Agencies: Part of the Problem, Part of the Solution? The Mandarin. 19 June. https://www.themandarin.com.au/94547-central-agencies-part-of-the-problem-part-of-the-solution/

Wolman, H. (1981). The Determinants of Program Success and Failure. Journal of Public Policy, 1(4), 433-464.

Wollmann, H. (Ed.). (2003). Evaluation in Public-Sector Reform: Concepts and Practice in International Perspective. Edward Elgar Publishing. Cheltenham.

Yeatman, A. (1987). The Concept of Public Management and the Australian State in the 1980s. Australian Journal of Public Administration, 44(4), 339-356.

Yeatman, A. (2009). Public Bureaucracy and ‘Customer Service’: The Case of Centrelink 1996–2004. In Individualization and the Delivery of Welfare Services (119-140). Palgrave Macmillan, London.

Yeend, G. J. (1979). The Department of the Prime Minister and Cabinet in Perspective. Australian Journal of Public Administration, 38(2), 133-150.

Yin, R. K. (2014). Case Study Research: Design and Methods (5th edition). SAGE, Thousand Oaks, California.