An Investigation of the Teaching Behaviours of Effective Secondary Mathematics Teachers in Dhaka, Bangladesh

Sheikh Asadullah

A thesis in fulfilment of the requirements for the degree of Doctor of Philosophy

School of Education, Faculty of Arts and Social Sciences, The University of New South Wales

April 2017

THE UNIVERSITY OF NEW SOUTH WALES
Thesis/Dissertation Sheet

Surname or Family name: Asadullah

First name: Sheikh

Abbreviation for degree as given in the University calendar: PhD
School: School of Education
Faculty: Faculty of Arts and Social Sciences

Title: An Investigation of the Teaching Behaviours of Effective Secondary Mathematics Teachers in Dhaka, Bangladesh.

Abstract: Education is a high priority for the Government of Bangladesh. Despite significant progress in access, equity and public examination success, poor student performance in English and mathematics in secondary schools has become a major concern for government, education practitioners and the public in Bangladesh.

A substantial body of international research has emphasised the important contribution of teacher instructional practices to student achievement. However, this has not been investigated in Bangladesh. The purpose of this study was twofold: first, to identify the 20 highest performing secondary schools in mathematics in Dhaka, Bangladesh; and second, to investigate the teaching practices of mathematics teachers in these schools.

A two-phase mixed method approach was adopted to accomplish the study purpose. In the first phase, secondary source data were obtained from Bangladesh education authorities and value-added measures were used to identify the highest performing secondary schools. In the second phase, a concurrent mixed method design, in which qualitative methods were embedded within a dominant quantitative approach, was utilised. A purposive sampling strategy was used to select fifteen teachers from the twenty highest performing secondary schools. The main sources of data were teacher observations, student ratings of teaching behaviours and interviews. The data from teacher observations and student ratings were analysed with descriptive and nonparametric statistics and confirmatory factor analysis, respectively. The interview data were analysed qualitatively.

The main findings showed teachers adopt a direct teaching approach which incorporates orientation, structuring, modelling, practice, questioning and teacher-student interaction that creates an individualistic learning environment. The variation in developmental levels of teaching skill indicates teachers do not necessarily use the qualitative (i.e., focus, stage, quality and differentiation) aspects of teaching behaviours effectively.

This is the first study to investigate teaching behaviours of effective secondary mathematics teachers within Dhaka, Bangladesh. It contributes an international dimension to the field of educational effectiveness and raises questions about the existing emphasis on constructivist approaches. Further, it contributes important insights about teaching behaviours that can be used to inform the development of evidence-based policy and practice on quality teaching in Bangladesh.

Declaration relating to disposition of project thesis/dissertation

I hereby grant to the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or in part in the University libraries in all forms of media, now or hereafter known, subject to the provisions of the Copyright Act 1968. I retain all property rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.

I also authorise University Microfilms to use the 350-word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral theses only).

Date: 11/04/2017
Author: Sheikh Asadullah
Witness: Nahid Hossain

The University recognises that there may be exceptional circumstances requiring restrictions on copying or conditions on use. Requests for restriction for a period of up to 2 years must be made in writing. Requests for a longer period of restriction may be considered in exceptional circumstances and require the approval of the Dean of Graduate Research.

FOR OFFICE USE ONLY
Date of completion of requirements for Award:


ORIGINALITY STATEMENT

‘I hereby declare that this submission is my own work and to the best of my knowledge it contains no materials previously published or written by another person, or substantial proportions of material which have been accepted for the award of any other degree or diploma at UNSW or any other educational institution, except where due acknowledgement is made in the thesis. Any contribution made to the research by others, with whom I have worked at UNSW or elsewhere, is explicitly acknowledged in the thesis. I also declare that the intellectual content of this thesis is the product of my own work, except to the extent that assistance from others in the project's design and conception or in style, presentation and linguistic expression is acknowledged.’

Signed Sheikh Asadullah

Date 11/04/2017


COPYRIGHT STATEMENT

‘I hereby grant the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or part in the University libraries in all forms of media, now or hereafter known, subject to the provisions of the Copyright Act 1968. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.

I also authorise University Microfilms to use the 350-word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral theses only).

I have either used no substantial portions of copyright material in my thesis or I have obtained permission to use copyright material; where permission has not been granted I have applied/will apply for a partial restriction of the digital copy of my thesis or dissertation.'

Signed Sheikh Asadullah

Date 11/04/2017


AUTHENTICITY STATEMENT

‘I certify that the Library deposit digital copy is a direct equivalent of the final officially approved version of my thesis. No emendation of content has occurred and if there are any minor variations in formatting, they are the result of the conversion to digital format.’

Signed Sheikh Asadullah

Date 11/04/2017


ABSTRACT

Education is a high priority for the Government of Bangladesh. Despite significant progress in access, equity and public examination success, poor student performance in English and mathematics in secondary schools has become a major concern for government, education practitioners and the public in Bangladesh.

A substantial body of international research has emphasised the important contribution of teacher instructional practices to student achievement. However, this has not been investigated in Bangladesh. The purpose of this study was twofold: first, to identify the 20 highest performing secondary schools in mathematics in Dhaka, Bangladesh; and second, to investigate the teaching practices of mathematics teachers in these schools.

A two-phase mixed method approach was adopted to accomplish the study purpose. In the first phase, secondary source data were obtained from Bangladesh education authorities and value-added measures were used to identify the highest performing secondary schools. In the second phase, a concurrent mixed method design, in which qualitative methods were embedded within a dominant quantitative approach, was utilised. A purposive sampling strategy was used to select fifteen teachers from the twenty highest performing secondary schools. The main sources of data were teacher observations, student ratings of teaching behaviours and interviews. The data from teacher observations and student ratings were analysed with descriptive and nonparametric statistics and confirmatory factor analysis, respectively. The interview data were analysed qualitatively.

The main findings showed teachers adopt a direct teaching approach which incorporates orientation, structuring, modelling, practice, questioning and teacher-student interaction that creates an individualistic learning environment. The variation in developmental levels of teaching skill indicates teachers do not necessarily use the qualitative (i.e., focus, stage, quality and differentiation) aspects of teaching behaviours effectively.

This is the first study to investigate teaching behaviours of effective secondary mathematics teachers within Dhaka, Bangladesh. It contributes an international dimension to the field of educational effectiveness and raises questions about existing emphasis on constructivist approaches. Further, it contributes important insights about teaching behaviours that can be used to inform the development of evidence-based policy and practice on quality teaching in Bangladesh.


ACKNOWLEDGEMENTS

Firstly, I would like to express all praise to Almighty ALLAH, the Omnipotent, the Omnipresent, the Most Merciful and Most Compassionate, and to His Holy Prophet, MUHAMMAD (Peace be Upon Him), the most perfect and exalted among all those ever born on the surface of the earth, who is forever a torch of guidance and knowledge for humanity as a whole.

My special appreciation and deepest gratitude go to my supervisor, Dr Kerry Barnett, for her unwavering support, patience and collegiality throughout this project. I owe a debt to my co-supervisor, Professor Paul Ayres, for his invaluable suggestions and insightful comments.

I am hugely appreciative of the financial support provided by the Australian Government through the Endeavour Postgraduate Scholarship Scheme. I am also grateful to the Government of Bangladesh for allowing me to study in Australia for five years.

I take this opportunity to express thanks to Professor Taslima Begum and Mr. Monjurul Kabir of BISE (Dhaka), and all the participant teachers and students for supporting me in collecting data for the study. Particular thanks go to Nipu for helping me greatly in collecting the second phase data.

Special mention goes to Fazle, Kabir, Rabbi, Saud, and Shahed for friendly and supportive guidance and encouragement. Words cannot express how grateful I am to my mother, mother-in-law, father-in-law, and extended family for keeping me in their prayers, which have sustained me thus far.

Finally, and by no means least, thanks go to my beloved wife Rany and my daughter Saniah for all the sacrifices they have made on my behalf. Without their precious love and support from the beginning, it would not have been possible to complete this research study.


Table of Contents

Abstract vi

Acknowledgements viii

List of Tables xv

List of Figures xvii

Chapter 1 Background 1

Introduction 1

Statement of the problem 5

Purpose of study 6

Significance of the study 6

Structure of the thesis 6

Summary 7

Chapter 2 Context 8

Education in Bangladesh 8

Overview of the education system 8

Typical student progression through the formal education system 10

Current state of primary and secondary education 12

Access and gender parity 13

Learning achievement 15

Summary 19

Chapter 3 Literature Review 20

Definition of teacher effectiveness 20

Overview of teacher effectiveness research 21

Teacher characteristics 25

Teacher background 25

Teacher self-efficacy beliefs 27

Teaching practices 30

Research findings from reviews of process-product studies 31

Opportunity to learn and time-on task 32

Classroom learning environment 33


Structuring 37

Questioning 39

Feedback 42

Practice 44

Summary of research findings from process-product studies 47

Research findings from meta-analyses 47

Seidel & Shavelson (2007) 47

Haystead & Marzano (2009) 48

Hattie (2009) 49

Learning intentions 50

Success criteria 50

Feedback 50

Student perspective of learning 51

Metacognitive strategies 51

Criticisms of meta-analyses findings 52

Theoretical models of educational effectiveness 53

The dynamic model of educational effectiveness (DMEE) 54

Teaching factors 56

Measurement dimensions of teaching factors 58

Developmental levels of teaching skill 60

Empirical support for the DMEE 63

Methods to identify teacher effectiveness 68

Classroom observation 68

Principal evaluation 68

Student ratings of teacher behaviour 69

Teacher self-report 69

Student academic outcomes 70

Value added measures 70

Multiple measures 74

Summary and main findings from the literature review 76


Conceptual framework 76

Research questions 78

Summary 78

Chapter 4 Research Methodology 79

Research approach 79

Phase one methodology 81

Research design 81

Research methods 81

Population and sample selection 81

Data collection 83

Data analysis 84

Phase two methodology 86

Research design 86

Research methods 88

Population and sample selection 88

Data collection and analysis 88

Observations 88

Interviews 95

Questionnaires 105

Summary of research procedures in this study 119

Summary 121

Chapter 5 Results and Discussion 122

Phase one results 122

Population and sample selection 122

Validity and reliability of JSC and SSC data 123

Preparation of JSC and SSC data 123

Value added analysis 124

Characteristics of twenty highest performing secondary schools 127

Phase two results 129

Population and sample selection 129


Preliminary data analysis 130

Demographic characteristics of teachers 130

Self-efficacy beliefs of teachers 131

Teaching observations 132

Data collection procedures 132

Data analysis procedures 133

Determining the quantitative characteristics of teaching behaviours 134

Preliminary analysis 135

Descriptive statistics 137

Frequency distributions 138

Extent to which teaching behaviours are used in lessons 140

Contingency table analysis 141

Determining the qualitative characteristics of teaching behaviours 151

Development and application of coding sequences 151

Descriptive statistics 154

Frequency distributions 162

Modifications to developmental levels of teaching skill 168

Determining the developmental levels of teaching skill 168

Teacher developmental levels of teaching skill 170

Student questionnaires 173

Data collection procedures 173

Preliminary data analysis 174

Confirmatory factor analysis 174

Descriptive statistics 180

Teacher interviews 181

Data collection procedures 181

Data analysis procedures 182

Emerging themes from teacher interviews 182

Orientation 183

Structuring 184


Modelling 185

Practice 186

Questioning 188

Discussion of results and research questions 189

Research question one 189

Research question two 190

Research question three 196

Research question four 197

Summary 202

Chapter 6 Conclusions 203

Study purpose 203

Summary of main findings 203

Strengths and limitations of the study 206

Implications for theory and practice 207

Directions for future research 211

References 213

Appendix A: Permissions 246

A1: Request and permission to use the instruments from the authors 247

A2: Request for the guidelines for analysing student questionnaire 248

A3: Phase one UNSW HREA Panel B approval 250

A4: Phase one BISE, Dhaka approval 251

A5: Phase two UNSW HREA Panel B approval 252

A6: Phase two request letter to DSHE, Dhaka 253

A7: Phase two DSHE, Dhaka approval 254

A8: Phase two recruitment email/letter to principals 255

A9: Phase two recruitment email/letter to teachers 256

A10: Participant information statement and consent form (teachers) 257

A11: Participant information statement and consent form (parent/guardian) 261

Appendix B: Instruments 265

B1: Teacher questionnaire 266


B2: Development of teacher self-efficacy scale 268

B3: Translated version (in Bangla) of teacher questionnaire 269

B4: Teacher interview schedule 272

B5: Translated version (in Bangla) of teacher interview schedule 273

Appendix C: Phase one data 274

C1. School ranking based on value added score for schools in DMC 275

Appendix D: Phase two data 278

D1: Teacher characteristics 279

D2: Mathematical content of lessons in teacher observations 281

D3: Codes and descriptors for qualitative characteristics of teaching behaviours 282

D4: Qualitative characteristics of classroom as a learning environment 286

D5: Frequency distributions for quantitative and qualitative characteristics of teaching behaviours 289

D6: Fisher’s exact test, phi and Cramer’s V and cell frequencies 300

D7: Confirmatory factor analysis results 304

D8: Modified developmental levels of teaching skills and checklist of criteria 312

D9: Teacher interview results 316


List of Tables

2.1 Overview of the current education system in Bangladesh 8

2.2 Expansion of primary schooling, 1990-2014 13

2.3 Expansion of secondary schooling, 1990-2014 14

2.4 Primary and secondary completion, dropout and survival rates, 2014 15

2.5 National examination pass rates, 1996-2014 16

3.1 Summary of research findings from the reviews of process-product studies 32

3.2 Developmental levels of teaching skill 61

5.1 Individual subject grade, score range, and grade point average 124

5.2 Twenty highest performing secondary schools in mathematics within DMC based on value-added scores (JSC, 2010 - SSC, 2013) 126

5.3 Descriptive statistics of the 380 secondary schools and the 20 highest performing secondary schools in mathematics within DMC 127

5.4 Characteristics of twenty highest performing secondary schools in mathematics within DMC 128

5.5 Teacher demographic characteristics (n = 15) 130

5.6 Descriptive statistics for teacher self-efficacy beliefs 131

5.7 Number of teaching tasks (counts) 135

5.8 Duration of teaching tasks (in minutes) 136

5.9 Descriptive statistics for number and duration of teaching tasks 137

5.10 Fisher’s exact p-value and phi statistics for number and duration of teaching tasks (n = 15) 143

5.11 Cell frequencies for number and duration of questioning tasks 143

5.12 Cell frequencies for number and duration of CE(I) tasks 144

5.13 Fisher’s exact p-value and Cramer’s V statistics for number and duration of tasks and mathematical content of lesson 145

5.14 Cell frequencies for mathematical content and number of structuring tasks 146

5.15 Cell frequencies for mathematical content and number of questioning tasks 147

5.16 Cell frequencies for mathematical content and duration of questioning tasks 147


5.17 Cell frequencies for mathematical content and duration of orientation tasks 148

5.18 Fisher’s exact p-value and Cramer’s V statistics for number of tasks and duration of teaching tasks among teachers 149

5.19 Fisher’s exact p-value and Cramer’s V statistics for number of tasks and duration of teaching tasks among schools 150

5.20 Stage, focus and differentiation dimensions of orientation, structuring, modelling, practice and questioning behaviours 155

5.21 Quality dimension of orientation, structuring, modelling, practice and questioning behaviours 156

5.22 Stage, focus and quality dimensions of classroom as a learning environment (teacher initiated) 157

5.23 Descriptive statistics for stage and focus dimensions of orientation, structuring, modelling, practice and questioning 158

5.24 Descriptive statistics for quality dimension of orientation, structuring, modelling, practice and questioning 159

5.25 Descriptive statistics for stage, focus and quality dimensions of classroom as a learning environment 161

5.26 Factor-item specification for hypothesised and one-factor CFA models of teaching behaviours 175

5.27 Fit indices for one-factor CFA models of teaching behaviours 176

5.28 Final fit indices for one-factor CFA models of teaching behaviours 178

5.29 Fit indices for five factor CFA model of teaching behaviours 178

5.30 Descriptive statistics, correlations and Cronbach alpha of latent teaching factors 180

5.31 Observed and student rated teaching behaviours 195


List of Figures

2.1 Location of eight education boards (general stream) in Bangladesh 11

3.1 Input-process-product framework 22

3.2 Dynamic model of educational effectiveness (DMEE) 55

3.3 Conceptual framework 77

4.1 Flow chart of research processes in the study 120

5.1 Percentage of teachers displaying the mean or above teaching behaviours 140

5.2 Teaching tasks as a percentage of teaching time 141

5.3 Teachers’ developmental levels of teaching skill 172


Chapter 1 Background

Introduction

Bangladesh is one of the world’s least developed countries with many problems affecting its social, political and economic make-up (Rahman, Hamzah, Meerah & Rahman, 2010). Most of the population live in extreme poverty and only a few have access to goods and services provided by government (Rahman et al., 2010). Hence, poverty reduction is a high priority for the government in Bangladesh. Most politicians have recognised that the country is endowed with limited natural resources and an abundance of human resources (Bangladesh’s population is currently 160.32 million [BBS, 2016]), and see education as critical to poverty reduction, economic progress and national prosperity (Andaleeb, 2007; Ministry of Education [MoE], 2004, 2016).

Since 1990, successive governments have made concerted efforts to fulfil constitutional obligations and international commitments to achieve the Education for All (EFA) and Millennium Development Goals (MDGs). For example, the government of Bangladesh has implemented several policy initiatives, including the Compulsory Primary Education Act 1990 and the Education for All (EFA) National Plan of Action (1992-2000, 2003-2015), to achieve these goals. More recently, the government has committed to achieving, by the year 2030, the 17 Sustainable Development Goals (SDGs) which follow on from the MDGs. Interestingly, SDG 4 is focused on education, “[to] ensure inclusive and equitable quality education and promote life-long learning opportunities for all” (General Economics Division [GED], 2015, p. 114). Furthermore, the current government has outlined its commitment to the people of Bangladesh in the ‘Vision 2021’ document, where it has pledged to combat poverty, build a ‘digital Bangladesh’ and move the country from a low-income to a middle-income country by the year 2021, by focusing on improving education as a catalyst for change (Ahmed, Hossain, Kalam & Ahmed, 2013). As an indicator of the level of government commitment to improving education, the National Education Policy (2010) has been updated and, for the first time in Bangladesh, the government is implementing a five-year plan (2016-2020) which prioritises strategies for improving education, including skill development, and emphasises achievement of SDG 4 (GED, 2015).

Bangladesh has made significant progress with regard to access to education, such as more schools and teachers, curriculum revision and increased enrolment rates, especially for girls in secondary education, through various government and non-government initiatives (Ahmed, 2013; Rahman et al., 2010). For example, the net enrolment rate in primary education increased more than 10% in the last decade (2005-2014) (Bangladesh Bureau of Educational Information and Statistics [BANBEIS], 2014). Not surprisingly, during 2004-2010, there was also an increase of 7% in the net enrolment rate of secondary education (BANBEIS, 2014; UNESCO, 2007). However, a number of reports (Ahmed, Nath, Hossain & Kalam, 2006; Huq & Rahman, 2008; UNESCO, 2012) have attributed this to the increased enrolment of girls as a result of government initiatives, for example, the provision of stipends designed to increase the number of girls enrolled in secondary education.

Despite these improvements, other measures have suggested progress in improving the quality of education has been far from impressive. For instance, in 2010 about 17% of the poor did not access education (Directorate of Primary Education [DPE], 2012), low attendance rates (41%) were reported in secondary schools (Nath et al., 2007), dropout rates averaged 42% from grades 6 to 10, and a low 58% of students completed grade 10 secondary education (BANBEIS, 2014), suggesting serious deficiencies in the quality of education. In addition, poor student performance in year eight (Junior School Certificate) and year ten (Secondary School Certificate) public examinations has been reflected in consistently high failure rates in English and mathematics. For example, a study conducted by Nath et al. (2007) found that only 7.5% of year 10 students were able to get half the items correct when scores for the compulsory courses of Bangla, English, Mathematics and Science were combined. Students performed worst in mathematics, with 16.4% able to get half the items correct, followed closely by English, with 26.8% able to get at least half the items correct. A more recent study (MoE, 2016) showed that 44% of grade eight students acquired nationally defined basic learning competencies in Bangla and a low 35% achieved this in mathematics. While the comparison of study results would suggest that some improvement has occurred, it seems Nath and colleagues’ (2007) observation that most students find mathematics difficult was still relevant five years later.

Additionally, Nath et al. (2007) reported widespread differences in year 10 student pass rates in public examinations (i.e., the Junior School Certificate and Secondary School Certificate) between government, non-government and religious schools in urban and rural locations. Generally, there was a hierarchy of quality among different types of schools. For example, the pass rates were 68% for students who attended urban government schools, 50% for urban non-government schools, 24.5% for rural non-government schools, 18.8% for urban madrasa (religious) schools and 7.8% for rural madrasa schools, suggesting that not all students receive a quality education (Nath et al., 2007).

Clearly, the Bangladesh government’s emphasis on poverty reduction has been effective with respect to access and participation in education. However, maintaining quality is equally important, “since access to education without a guarantee of a minimum level of quality is meaningless” (Ahmed et al., 2006, p. 5). Rahman et al. (2010) have suggested that the issues of access, equity and quality should be integrated and dealt with simultaneously. Nevertheless, finding ways to combine growth with quality remains a major challenge for policymakers (Ahmed et al., 2006).

Several studies (Ahmed et al., 2006; Nath, Chowdhury, Ahmed & Choudhury, 2014; Nath et al., 2007) have attributed poor student performance and low-quality education to poor instructional practices of teachers, inadequate facilities and learning materials, poor enforcement of rules and criteria for government assistance, inadequate resourcing and poor management of schools.

A number of government initiatives and educational reforms have been implemented with international assistance with the goal of addressing factors identified as contributing to poor student performance and low-quality education. One initiative, the Teaching Quality Improvement in Secondary Education Project (TQI-SEP) (source: http://www.moedu.gov.bd/old/moe_dshe_TQISEP.html), was implemented in 2005-2011 to improve the quality of teaching in secondary education. This project focused on improving teacher training, education policy and administrative management, and education facilities. An outcome of this project has been an increase in the number of qualified teachers in terms of background characteristics such as certification, a minimum one-year bachelor’s degree and demonstrated subject matter competence. Further, the project funders have attributed improved student pass rates in the Secondary School Certificate (SSC) examination, from 52.5% at the start of the project in 2005 to 67.4% in 2009 and 82.1% in 2011, to the positive impact of teacher training (CIDA, 2012).

A meta-analysis conducted by Hattie (2003, 2009) has shown that teachers account to a large extent for student learning and achievement gains. Nevertheless, Hattie (2012) has cautioned, “it would be easy to say that it is teachers who make the difference…rather there are some teachers doing some things that make the difference” (pp. 22-23). In other words, teachers differ substantially in their impact on student learning. For example, students in classrooms with highly effective teachers are likely to have almost a year’s learning advantage over students in classrooms with less effective teachers. The difference between highly effective and less effective teachers is “related to attitudes and expectations teachers have when they decide on what to teach, the level of difficulty, progress and effects of their teaching” (Hattie, 2012, p. 23).

Hence, although the TQI-SEP has increased the likelihood that students are taught by trained teachers, the improvement in teacher background factors such as training is unlikely to ensure that all students are being taught by teachers who are effective in promoting student learning and achievement (Palardy & Rumberger, 2008).

Statement of the problem

There is widespread concern among policymakers, educators and the public in Bangladesh about the quality of education. This has been fuelled by low levels of student academic performance in examinations of key learning areas such as mathematics, science, English and Bangla. A large body of research (Hattie, 2003, 2009) has shown that teachers are key contributors to school and student academic performance; more specifically, Hattie (2012) argued the evidence suggests that it is some teachers, with certain attitudes and expectations, who contribute the most to student academic performance. However, most researchers in Bangladesh have focused on investigations of teacher and student characteristics (Ahmed et al., 2006; Asadullah, 2008; Huq & Rahman, 2008; Uddin, 2007) and curriculum (Ahmed et al., 2006; Nath et al., 2007; Shekh, 2005). Teaching practices have not been the subject of systematic investigation, and consequently, very little is known about what teachers do to facilitate student learning in this country. It is contended that, given the paucity of studies, any investigation of teaching practices would provide important insights that potentially would lead to improvement in the quality of education in Bangladesh.

Purpose of the study

The purpose of this study is twofold: first, to identify highly performing secondary schools in mathematics within Dhaka Metropolitan City (DMC), Bangladesh; and second, to investigate the teaching behaviours of mathematics teachers in the twenty highest performing secondary schools within Dhaka, Bangladesh.

Significance of the study

This is a potentially important study. It is one of the first studies to investigate systematically and rigorously the teaching practices of mathematics teachers in secondary schools in Bangladesh. The findings are expected to provide policymakers with insights into teaching practices, and may be used to develop future teacher training initiatives for mathematics teachers. Further, it is hoped the findings will facilitate improved teaching practices and student mathematics achievement in Bangladesh. The results from the study will add to the knowledge of effective teaching practices, with insights gained by exploring teaching practices in a non-western context, Bangladesh, where teaching practices have largely been under-investigated.

Structure of the thesis

The overall structure of the thesis takes the form of six chapters. The first chapter has introduced the context, the problem, purpose and significance of the study. The second chapter contextualises the research by providing information on the education system in Bangladesh. The third chapter reviews relevant literature underlying the conceptual framework developed to guide the study and poses the research questions.

The fourth chapter is concerned with the research methodology employed in the study. The fifth chapter reports and discusses the results with reference to the research questions. The final chapter summarises the key findings to draw conclusions and includes a discussion of study limitations and implications for theory, practice and future research.

Summary

In summary, this chapter has provided the background and main motivation for the study, including its purpose, and explained why the study is significant. The next chapter will provide relevant contextual information about the education system in Bangladesh.

Chapter 2 Context

This chapter provides background information on the context of the study. First, an overview of the education system in Bangladesh is given, and this is followed by a more in-depth and detailed description of the characteristics, and discussion of the state of primary and secondary education in Bangladesh.

Education in Bangladesh

Overview of the education system.

Table 2.1 shows the formal education system in Bangladesh.

Table 2.1

Overview of the current education system in Bangladesh

System (age)          Stream      Stage              Grade        Institute
Primary Education     General     Primary            1-5          primary school
(6-10 years)          Madrasah    Primary            1-5          ebtedayee madrasah
Secondary Education   General     Junior secondary   6-8          junior secondary school, high school
(11-17 years)                     Senior secondary   9-10         high school, school section of intermediate college
                                  Higher secondary   11-12        intermediate college, intermediate section of tertiary institutes
                      Madrasah    Secondary          6-10         dakhil madrasah
                                  Higher secondary   11-12        alim madrasah
                      Vocational  Junior secondary   6-8          junior secondary school, high school
                                  Senior secondary   9-10         technical & polytechnic institutes
                                  Higher secondary   11-12        technical & polytechnic institutes
Tertiary Education    General     Bachelor (Pass)    13-15        college, university, professional institutions
                                  Bachelor (Hons.)   13-16        college, university, professional institutions
                                  Masters            16-17        college, university, professional institutions
                      Madrasah    Fazil              13-14        fazil & kamil madrasah
                                  Kamil              14-15        fazil & kamil madrasah
                      Vocational  Diploma            11-14/18     polytechnic institutes
                                  Degree             13-16/15-16  engineering colleges
Source: www.moe.gov.bd

Table 2.1 shows the existing formal education system in Bangladesh consists of primary education (five years), secondary education (seven years), and tertiary education. Primary education is confined to grades one to five (ages 6-10). Secondary education includes grades six to twelve and caters to young people aged 11-17 years. It consists of three stages: junior secondary (three years), senior secondary (two years) and higher secondary (two years). The higher secondary stage is followed by tertiary education (Rahman et al., 2010; UNESCO, 2007). Primary education was made free and compulsory for children by the Compulsory Primary Education Act of 1990 (UNESCO, 2007).

As a result of recent changes made to the National Education Policy (2010), the existing formal education system (see Table 2.1) is to be restructured by 2018 (MoE, 2016). From 2018, the formal education system in Bangladesh will consist of compulsory primary education (grades one to eight), secondary education (grades nine to twelve), and tertiary education, where a 4-year Honours degree will be followed by degrees at the postgraduate level such as a Master's, MPhil or PhD (MoE, 2016). It should be noted that the present study was conducted within the existing formal education structure shown in Table 2.1.

The institutes (or schools) in the three subsystems (primary, secondary and tertiary) comprise public and private institutes. The public (government) institutes are managed and fully funded by the Bangladesh government. The private (non-government) institutes are managed independently; however, they may be funded either by a government subsidy or by independent sources.

Two government ministries are responsible for the education system in Bangladesh: the Ministry of Primary and Mass Education (MoPME) is responsible for primary education, and the Ministry of Education (MoE) is responsible for secondary and tertiary education.

Typical student progression through the formal education system.

A student may enter the education system at age 3-5 years at the pre-primary level and attend a private school or for up to two years or attend a government primary school for six months before commencing her/his primary education.

A student aged 6-10 years is enrolled in five years of compulsory primary education at a government, registered non-government, or fully independent primary institute (or school).

In metropolitan cities, government and non-government primary schools cater only to the poorer sections of the community, as better-off families usually send their children to privately managed kindergartens or to the primary sections of secondary schools. In addition, a substantial number of non-government organisations (NGOs) operate primary schools which cater for dropouts from government and non-government primary schools.

At the end of primary school (grade 5), the learning achievement of students from the general and madrasah streams is assessed at a national public examination known as the Primary School Certificate (PSC) examination, specifically the 'Prathomik Shikkha Somaponi Porikkha' (general stream) and the 'Ebtedayee Shikkha Somaponi Porikkha' (madrasah stream). A student who successfully passes the PSC is eligible to continue her/his education at the secondary level.

Secondary education includes two stages (as shown in Table 2.1), the secondary stage (grades 6-10) and the higher secondary stage (grades 11-12). The secondary stage is further divided into the junior secondary (grades 6-8) and senior secondary (grades 9-10) stages. At the end of the junior secondary stage (grade 8), the learning achievement of a student is assessed at a public examination, known as the Junior School Certificate (JSC) in the general stream and the Junior Dakhil Certificate (JDC) in the madrasah stream.

A student who passes the JSC or JDC may proceed to the senior secondary stage (grades 9-10) and be enrolled in a general, madrasah or vocational stream school (see Table 2.1). In the general stream, students select a curriculum in the humanities, science or business disciplines. In the madrasah stream, students select between general, science, mujaddid and hifjulquaran options (both 'mujaddid' and 'hifjulquaran' emphasise an Islamic curriculum). In the vocational stream, there is no sub-division and a two-year certificate program is offered. At the end of the senior secondary stage (grade 10), learning achievement is assessed at a public examination, known as the Secondary School Certificate (SSC) in the general stream, the Dakhil in the madrasah stream, and the SSC Vocational in the vocational stream.

Figure 2.1 Location of eight education boards (general stream) in Bangladesh

Education boards have been appointed by the government to administer and conduct all public examinations, such as the JSC, for students attending general stream schools in eight education regions (see Figure 2.1). One education board oversees public examinations conducted in the madrasah stream, and one education board looks after public examinations in vocational stream schools.

In addition to public examinations, student learning achievement is assessed annually by schools. This assessment is organised, conducted and controlled by schools according to the instructions of the relevant education board.

Students who succeed in passing the SSC examination may continue to the higher secondary stage attending either an intermediate college in the general or madrasah stream or a technical/polytechnic institute in the vocational stream. At the completion of two years of higher secondary education, student learning achievement is assessed at the Higher Secondary Certificate (HSC) examination, and if successful, a student may proceed to tertiary education attending a public or private university, technical college or specialised institution (see Table 2.1).

Current state of primary and secondary education.

As noted in Chapter 1, Bangladesh is one of the world's poorest countries, with approximately 40% of the population below the poverty line (General Economics Division [GED], 2009). Poverty reduction remains a priority for the government, and education has been recognised as critical to poverty reduction, economic progress, and prosperity. Article 17 of the Constitution enshrines the right of every citizen to free universal basic education. In the context of this constitutional obligation, successive governments have passed various laws and acts regarding primary education, for example, the Compulsory Primary Education (CPE) Act, 1990. In addition to these national imperatives, Bangladesh is committed to attaining goals in education as set out in various international declarations. For example, Bangladesh is a signatory to the Jomtien Conference, 1990, and the Dakar Conference, 2000, and is committed to achieving the 17 sustainable development goals (SDGs). The goals which specifically relate to education are SDG 4, 'to ensure inclusive and equitable quality education and promote life-long learning opportunities for all', and SDG 5, 'to achieve gender equality and empower all women and girls by 2030'. The government and international agencies have made significant steps toward achieving these goals.

Access and gender parity.

Over the last two decades, remarkable progress has been made with respect to access (MDG 2) and gender parity (MDG 3) in primary and secondary education. Table 2.2 shows that 37,473 new non-government primary schools were established and 282,824 primary teachers were appointed between 1990 and 2014. From Table 2.2 it can be seen that this was accompanied by a corresponding 38 percentage point increase in the net enrolment rate (NER), suggesting significant improvement in access to education. As well, Table 2.2 shows the increase in the NER for girls was higher, at 48 percentage points, suggesting considerable progress has been made with regard to gender parity.

Table 2.2

Expansion of primary schooling, 1990-2014

        Schools (number)                Teachers (number)     Students    NER (%)
Year    Total     Govt.    Non-Govt.    Total     Trained     (millions)  Total  Boys  Girls
1990    45,783    37,760   8,023        200,060   n/a         12.34       60     69    51
2014    108,537   63,041   45,496       482,884   48%         19.55       98     97    99

Source: DPE (2012), GED (2009), BANBEIS (2014).
Notes. NER refers to net enrolment rate; Govt. refers to government; Non-Govt. refers to non-government; n/a indicates data not available. The NER figures are rounded to the nearest whole number.

The rapid increase in primary enrolment resulted in a significant increase in the number of students continuing to secondary education. Table 2.3 indicates student numbers in secondary schools increased from 2.99 million to 9.16 million between 1990 and 2014, 9,236 new secondary schools were opened, and the number of teachers also increased. Not surprisingly, the net enrolment rate also increased, by 28 percentage points, during this period. Table 2.3 suggests the gains in gender parity observed in primary education (see Table 2.2) are also evident in secondary schools, reflected in the increased NER of girls (55%) in secondary education.

Table 2.3

Expansion of secondary schooling, 1990-2014

        Schools (number)                Teachers (number)     Students    NER (%)
Year    Total     Govt.    Non-Govt.    Total     Trained     (millions)  Total  Boys  Girls
1990    10,448    295      10,153       122,896   n/a         2.99        22     28    15
2014    19,684    327      19,357       232,994   62%         9.16        50     46    55

Source: BANBEIS (2014).
Notes. NER refers to net enrolment rate; n/a indicates data not available. The NER figures are rounded to the nearest whole number.

Despite the impressive progress made with regard to access and gender parity in primary and secondary education, not all primary school aged children participate in or are able to attend primary school, and less than 40% of eligible primary graduates have access to or attend secondary schools. Poverty is the predominant reason for non-attendance or non-access in primary and secondary education in Bangladesh (Ahmed et al., 2005). For example, a study conducted by Filmer and Pritchett (1999) reported that a child from a wealthy family is more likely to have completed grade 1 than a child from a poor family, and this gap widens at the secondary stage of education, to the point where a child from a wealthy family is six times more likely to have completed grade 9 than a child from a poor family.

In addition, Table 2.4 and the net enrolment rates (see Table 2.2 and Table 2.3) suggest that for every 100 primary school aged (6-10) children, 95 will begin their education in grade 1, 57 will graduate from primary school, only 24 of these primary graduates will commence the initial stage of secondary education (grade 6), and 10 will successfully complete the secondary stage of education (grade 10). Further, this is reflected in labour force estimates, which showed approximately 5% of the workforce held secondary education qualifications (BBS, 2002).

Table 2.4

Primary and secondary completion, dropout and survival rates, 2014

                              Primary                          Secondary
Indicator          Total (%)  Boys (%)  Girls (%)   Total (%)  Boys (%)  Girls (%)
Completion rate¹   79.1       75.7      82.5        58.06      65.98     51.62
Dropout rate       20.9       24.3      17.5        41.94      34.02     48.38
Survival rate²     81.0       77.6      84.4        63.83      72.06     57.10

Source: DPE (2014), BANBEIS (2014)
Notes. ¹ Completion rate is the number of students who passed the terminal/graduation examination of a level (for example, the Primary Terminal Examination at grade 5) as a percentage of children of that level's graduation age (for example, age 10). ² Survival rate is the percentage of a cohort of students in the initial grade of a level (for example, grade 1 at the primary level) who reach the end grade of that level (for example, grade 5 at the primary level), regardless of repetition.
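The survival rate defined in note 2 of Table 2.4 can be written compactly as a ratio; the formula below is a restatement of that note (the notation is introduced here for clarity and is not taken from the source).

```latex
\text{Survival rate} \;=\; \frac{N_{\text{end grade}}}{N_{\text{initial grade}}} \times 100\%
```

Read against Table 2.4, the primary survival rate of 81.0% means that of every 100 children entering grade 1, about 81 reach grade 5, regardless of repetition.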

According to the indicators in Table 2.4 and the recent report from the General Economics Division of the Planning Commission of Bangladesh (i.e., the Millennium Development Goals Bangladesh Progress Report, 2015), Bangladesh was not able to attain the goal of universal completion of primary schooling by all citizens, stipulated as a Millennium Development Goal for 2015. Clearly, this is of major concern to the government, policymakers, and educators in Bangladesh, and there is an urgent need to investigate the reasons for low completion and high student dropout rates in primary schools.

Learning achievement.

Progress has also occurred in student learning achievement as measured by pass rates in national examinations. Table 2.5 shows an upward trend in the pass rates for the Primary Scholarship Examination (an increase from 12% to 74% during the period 1996-2008) and the Primary Terminal Examination (PTE) (an increase from 92% to 97% during the period 2010-2014). Further, an upward trend in the pass rates for the Junior School Certificate (JSC), from 71% to 89% during the period 2010-2014, and an increase from 42% to 92% during the period 1996-2014 in the Secondary School Certificate (SSC) examination are also evident (see Table 2.5).

Table 2.5

National examinations pass rates, 1996-2014

Examination                                      Year    Pass rate (%)
Primary (grades 1-5) education
  Primary Scholarship Examination (PSE) /        1996    12.10
  Primary Terminal Examination (PTE)             2000    26.34
                                                 2008    74.03
                                                 2010    92.34
                                                 2012    97.35
                                                 2014    97.93
Secondary (grades 6-10) education
  Junior School Certificate (JSC)                2010    71.34
                                                 2012    86.11
                                                 2014    89.85
  Secondary School Certificate (SSC)             1996    42.61
                                                 2000    41.58
                                                 2012    86.37
                                                 2014    92.67

Source: DPE (2014a), BANBEIS (2014)
Notes. Until 2008, the PSE was conducted centrally by the DPE each year, and a proportion of the best students in each school could sit the examination at the end of grade 5. In 2009, the PTE replaced the PSE; this compulsory nationwide examination is held annually to certify that a child has successfully completed primary education (grade 5). The JSC examination was introduced countrywide in 2010; therefore, data are presented from 2010 onwards. Grade 8 students must pass the examination to continue to grade 9.

Although Table 2.5 suggests improvement in examination pass rates, a number of other government reports have indicated student learning achievement is poor. For example, in February 2005, newly admitted grade 6 students in all secondary schools were required by the Ministry of Education to sit an examination which assessed the knowledge, skills and competencies acquired during the primary years. Specifically, the examination assessed five areas: Bangla, English, mathematics, environmental studies and religious studies. According to Nath (2006), the average pass rate was 50%, and there were large variations in student performance between districts, schools, and streams.

Similarly, earlier independent studies have reported weak learning achievement among primary school students. For example, Chowdhury, Choudhury, Nath, Ahmed and Alam (2000) conducted a study of 27 primary school terminal competencies and concluded that less than 2% of students had attained the 27 competencies on completion of primary school. As well, a study on literacy assessment conducted by Ahmed, Nath and Ahmed (2003) reported that about 36% of students remained non-literate or semi-literate even after completing five years of primary school. Further, in 2008 the learning achievements of 30,000 grade 3 and grade 5 pupils from 720 schools were assessed in the National Student Assessment Survey. This examination assessed the achievement of learning outcomes in Bangla and mathematics for grade 3, and in Bangla, mathematics, English, science and social studies for grade 5. Learning achievement was defined as achieving a score of 50% or more of the total marks, and 'mastery of the subject' was defined by three levels: mastery (scored 80% and above), partial mastery (scored 50-79%), and non-mastery (scored up to 49%). Although most students were classified as 'achieving' (the average score was above 50%) across subjects, mastery was reported to be weak. For example, 1.7% of grade 3 students achieved mastery in Bangla, and 1% achieved mastery in mathematics. The figures for grade 5 students were 13.7% mastery in Bangla and 3.1% mastery in mathematics. Also, variations in student performance were evident across rural and urban districts and between streams, consistent with the Ministry of Primary and Mass Education (2006) findings.

These government and independent studies provide compelling evidence, contrary to the indicators presented in Table 2.5, that the level of primary school student achievement is low, and this has been acknowledged by policy makers (Directorate of Primary Education [DPE], 2012).

The low levels of learning achievement of primary school students may be associated with the increased failure rates in secondary education public examinations. For example, Table 2.5 shows that in 2014 failure rates were less than 3% at the primary level, increased to 11% at the junior secondary level, and were 8% at the senior secondary level. Investigations have attributed the higher failure rates at the secondary level to poor performance in English and mathematics (Secondary Education Quality and Access Enhancement Project [SEQAEP], 2011).

There is a paucity of in-depth research on student learning achievement at the secondary level in Bangladesh. The exception is a study conducted by Nath et al. (2007), who investigated student performance in specific subjects. The results indicated student performance in mathematics was lower than performance in Bangla, English, and science. Additionally, the authors reported widespread differences in student examination performance between urban and rural schools, and among government, non-government and religious schools (see Chapter 1).

At least four government and international agency reports (Asian Development Bank [ADB], 2002, 2004; Centre for Policy Dialogue [CPD], 2001; UNICEF, 2009) have attributed poor student achievement to deficient teacher skills and the quality of the teaching-learning process. In particular, at the secondary school level, low pass rates in English and mathematics have been linked to poor teaching (CPD, 2001). These findings are consistent with recent research evidence (Hattie, 2003, 2009) which has suggested approximately 30% of the variation in student learning and achievement gains can be attributed to teachers. However, research investigating teachers and classroom teaching practices is very rare in Bangladesh, with the exception of two studies. A study by Shekh (2005) of primary school teacher practices reported that teachers used a teacher-centred approach and there was very little student-teacher interaction in classrooms. Another study, conducted by Lutfuzzaman, Muhammad and Hasan (2006), investigated secondary mathematics teaching and noted the following:

'Most of the teachers lack adequate mathematics content knowledge as they did not study mathematics at the graduation and post-graduation level. They taught with deficient preparation, were unwilling to adjust teaching techniques to varying abilities of different students, and they did not allow students to solve problems in a different way or through other methods than those followed by teachers' (p. 31).

Generally, significant progress with respect to access and equity in education has occurred in Bangladesh. However, a distinct challenge for policymakers is to raise the levels of student learning achievement. While there is a pressing need to investigate all teaching practices, the previous discussion has emphasised a more urgent need to investigate teaching practices of mathematics teachers due to relatively low levels of student performance.

Summary

This chapter has contextualised the study by outlining the main characteristics of the education system and the key concerns of policymakers, educators and the public with regard to the quality of education in Bangladesh, and has emphasised the need to investigate teaching practices. The next chapter will review the literature on teacher effectiveness; based on this review, the conceptual framework will be developed and the research questions for the study posed.


Chapter 3 Literature Review

This chapter provides a systematic review of the literature on teacher effectiveness. It consists of three sections. The first section provides a definition of teacher effectiveness for the study and a brief overview of the teacher effectiveness literature. In the next section, the three strands of the teacher effectiveness literature are reviewed: teacher characteristics, teaching practices and measurement methods of teacher effectiveness. The final section describes the conceptual framework and presents the research questions investigated in the study.

Definition of teacher effectiveness

The literature on teacher effectiveness is extensive and diverse. This is reflected in the number of definitions, which seem to depend on a host of variables such as who is defining the term, who the learners are, the subject matter, and the methods of investigation. Despite these differences, most researchers agree that the critical criterion for determining teacher effectiveness is student learning outcomes (Barry, 2010; Reynolds, 1995). However, the effects of teaching on student learning are diverse and can be differentiated into three broad domains: affective, psychomotor, and cognitive outcomes (Guskey, 2013; Sammons, DeLaMatre & Mujtaba, 2002; Seidel & Shavelson, 2007). Affective outcomes (Krathwohl, Bloom & Masia, 1964) refer to the social, emotional and attitudinal aspects of learning. Psychomotor outcomes (Simpson, 1966) refer to specific skills or behaviours in certain technical fields. Cognitive or academic outcomes (Bloom, Englehart, Furst, Hill & Krathwohl, 1956) refer to gains in academic achievement measured either by standardised tests or by teacher-developed, specific tests (Guskey, 2013; Hunt, 2009). Cognitive outcomes provide the foundation of a school's academic curriculum, and may vary across subject areas and span a broad range of subdomains in each subject area (Guskey, 2013). For example, cognitive goals or academic achievement in science may be distinct from the cognitive goals set for language arts. Again, the cognitive goal for mathematics may be divided into its subdomains of operations and algebraic thinking, number and operations, measurement and data, geometry, and mathematical practices (CCSSO & NGA, 2010). Further, the cognitive or academic outcome in each subdomain may also vary in the level of intellectual complexity, from simple goals that require recall of factual information to more complex goals that require sophisticated analysis or inference (Guskey, 2013).

Thus, most studies of teacher effectiveness have focused on academic outcomes (Guskey, 2013; Sammons et al., 2002; Seidel & Shavelson, 2007) and examined the correlation of teachers' characteristics and teaching practices with student academic achievement (Goe, 2007; Wayne & Youngs, 2003). Adopting a cognitive perspective, Groschener, Seidel and Shavelson (2013) distinguished between teacher effects, "which refer to teacher characteristics and qualifications as predictors of teaching and student academic achievement", and teaching effectiveness, "which refers to the effects of teaching on student learning" (p. 240). Based on this distinction, Groschener et al. (2013) contended that the relationship between teacher effects (teacher characteristics and qualifications) and student academic achievement is mediated by teaching practices. Given these distinctions, teacher effectiveness is broadly defined in this study as teacher characteristics and teaching practices that contribute to student academic achievement.

Overview of teacher effectiveness research

Over the last forty years, as new insights have been gained and successive researchers have endeavoured to overcome the weaknesses of preceding investigative approaches, the concept of effective teaching has gradually been broadened. In the early 1960s, researchers examined the direct link between inputs (e.g., teacher personality) and outputs (e.g., academic achievement), ignoring the process variables (i.e., teaching practices), to explain differences in student performance. For example, Coleman et al. (1966) conducted a study in the United States which examined the influence of school resources on student achievement and found that differences in student achievement (measured by standardised tests) were greater within schools than between schools. They attributed these differences to student background and concluded that student background and socioeconomic status had more impact on student outcomes than schools and teachers. Such studies have been criticised for not focusing on what actually happens in the classroom (Getzels & Jackson, 1963; Kyriacou, 2009; Muijs & Reynolds, 2005), and have been referred to as 'black-box' research (Good, 2010).

By the late 1960s, researchers had shifted their investigations to focus on teacher classroom activities and student academic achievement (Freiberg & Driscoll, 2000; Kauchak & Eggen, 1998; Killen, 2013; Kyriacou, 2009). Since then, most researchers (e.g., Brophy & Good, 1986; Croll, 1996; Emmer, Evertson & Anderson, 1980; Good, Grouws & Ebmeier, 1983; Mortimore, Sammons, Stoll, Lewis & Ecob, 1988; Pressley, 1999; Reynolds & Muijs, 1999; Stronge, 2002) have focused on investigating the relationship between teaching practices and student academic achievement using the input-process-product framework shown in Figure 3.1. This framework was based on the hypothesis that classroom teaching practices influence student academic achievement (Seidel & Shavelson, 2007).

INPUT (Teacher Characteristics) → PROCESS (Classroom Teaching Practices) → PRODUCT (Student Academic Achievement)

Figure 3.1. Input-process-product framework


In the input-process-product framework shown in Figure 3.1, the inputs are teacher characteristics, including teacher background characteristics such as qualifications and experience, teacher knowledge, and self-efficacy beliefs. The processes are classroom teaching practices, and student academic achievement (most often measured by student performance on standardised tests) represents the product. Studies based on the input-process-product model (e.g., Anderson, Evertson & Brophy, 1979; Good & Grouws, 1979a, 1979b; Mortimore et al., 1988; Stallings, Almy, Resnick & Leinhardt, 1975) have investigated the relationships between teacher characteristics, the actions and practices of teachers, and student achievement. In the 1970s and 1980s, scholars who investigated effective teaching using the input-process-product model accumulated substantial evidence to support the contention that certain teaching behaviours contributed to student achievement (Groschener et al., 2013; Hill, Rowan & Ball, 2005). For example, student achievement was reported to be associated with teaching styles (Bennett, 1976; Flanders, 1970; Galton & Simon, 1980), time-on-task (Berliner, Fisher, Filby & Marliave, 1978; Croll & Moses, 1988; Smith, 2000), proactive classroom management (Kounin, 1970), structuring lessons (Mortimore et al., 1988), questioning (Cole & Chan, 1987), and lesson reflection (Korthagen & Kessels, 1999; Schon, 1987).

In the 1990s, constructivist views of learning were adopted. Although this concept has roots in classical antiquity, going back to Socrates' dialogues with his followers, new perspectives have been added by philosophers, psychologists, educators, and others, such as Piaget, Dewey, Vygotsky, Bruner, Ausubel, Glasersfeld, and Morin. Hence, the term has been interpreted in different ways as a theory of learning (Woolfolk, Winne & Perry, 2009). However, most constructivist theory can be divided into two forms: psychological (individual) constructivism (Piaget, 2001; Phillips, 1997) and social constructivism (Mason, 2007; Vygotsky, 1978). In psychological constructivism (also known as 'first wave' or 'solo' constructivism), new knowledge is constructed through the transformation, organisation, and reorganisation of previous knowledge and experience (Woolfolk et al., 2009). In social constructivism (known as 'second wave' constructivism), new knowledge is formed from students' previous knowledge and/or experience and social interaction (Woolfolk et al., 2009). In both forms, constructivism is based on the premise that a student actively constructs her/his own knowledge rather than passively receiving it from a teacher (Askew, Brown, Rhodes, Johnson & William, 1997; Muijs & Reynolds, 2005; Zimmerman, 2002). Therefore, according to this perspective, the teacher is seen as a facilitator of learning rather than a transmitter of information (Dart, 1994), and is required to provide experiences that actively engage students in activities that promote higher-order learning (Bransford & Donovan, 2005; Groschener et al., 2013; Seidel & Shavelson, 2007).

Constructivism can be applied in teaching through specific strategies that may include inquiry learning (Dewey, 1933; Dowden, 2007), problem-based learning (Hmelo-Silver, 2004), scaffolding (Hannafin, Hill & Land, 1997), coaching (Costa & Garmston, 1994), and reflection (Duffy & Jonassen, 1992). Applying a constructivist approach to the teaching and learning process encourages a student to construct meaning about a specific task through self-regulated learning comprising three areas: cognition, meta-cognition (thinking about thinking), and motivation/affect (e.g., self-efficacy beliefs) (Muijs et al., 2014).

More recently, scholars (Hattie, 2009; Haystead & Marzano, 2009; Kyriakides, Christoforou & Charalambous, 2013a; Seidel & Shavelson, 2007) have utilised meta-analysis. Meta-analysis combines findings from studies of teacher effectiveness to estimate effect sizes (Muijs et al., 2014). The findings from various meta-analyses are important because they have affirmed previous research findings and have added new insights to the field of teacher effectiveness (Muijs et al., 2014).
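To illustrate the effect-size logic on which such meta-analyses rest, the sketch below computes a standardised mean difference (Cohen's d) for a single study and a weighted average across several studies. The figures and function names are hypothetical and are not drawn from the studies cited above.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference between a treatment and a control group."""
    # Pool the two group standard deviations, weighting by degrees of freedom
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def fixed_effect_mean(effects, weights):
    """Weighted average of study effect sizes (e.g., weights = sample sizes)."""
    return sum(d * w for d, w in zip(effects, weights)) / sum(weights)

# One hypothetical study: a class taught with a given practice vs. a comparison class
d1 = cohens_d(75.0, 10.0, 30, 70.0, 10.0, 30)  # → 0.5

# Combine three hypothetical studies, weighting larger samples more heavily
print(round(fixed_effect_mean([d1, 0.7, 0.4], [60, 40, 100]), 2))  # → 0.49
```

The meta-analyses cited above typically use inverse-variance weights and random-effects models; the simple sample-size weighting here only conveys the idea of pooling effects across studies.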

Additionally, scholars have posited a number of theoretical models of teacher effectiveness (Creemers, 1994; Creemers & Kyriakides, 2008; Scheerens, 1992). These models have integrated results from school effectiveness studies with teacher effectiveness findings. Reflecting an input-process-output approach, the models typically emphasise the classroom level, but adopt a multilevel approach to include other levels of influence, such as school and student effects (Muijs et al., 2014).

In summary, three areas of the teacher effectiveness literature are relevant to this study: teacher characteristics, teaching practices, and methods to identify teacher effectiveness. These areas of the literature are reviewed in the next section.

Teacher characteristics

Much of the early work on teacher effectiveness focused on identifying the personal characteristics of effective teachers, beginning with consideration of personality and shifting to the investigation of characteristics more closely related to teaching, such as teacher qualifications and experience (Creemers & Kyriakides, 2015).

The following review is limited to research concerned with teacher background characteristics, such as teacher qualifications and experience, and teacher self-efficacy beliefs, which are relevant to this study.

Teacher background.

Researchers have investigated relationships between student achievement and a number of teacher background variables including teaching experience (years of teaching), academic level (formal educational degree completed), teacher certification (receiving credentials from an authoritative source), and participation in teacher education programs (Palardy & Rumberger, 2008; Wayne & Youngs, 2003). Teacher education programs include coursework which focuses on equipping teachers with knowledge and understanding of student needs, development and learning, pedagogical knowledge, and content area knowledge (Stronge, 2007).

Goldhaber & Brewer (1997) examined the association of teacher certification and teacher qualifications (degree level) with grade 10 student mathematics achievement in public schools in the United States. The results suggested students in classes taught by a certified and qualified teacher (Bachelor’s or Master’s degree in mathematics) were more likely to obtain higher test scores than students in classes of non-certified or non-qualified teachers. Similarly, Darling-Hammond, Holtzman, Gatlin & Heilig (2005) reported significant positive correlations of teacher certification status, experience and degree level with achievement gains of fourth and fifth graders from 1995-2002 in public schools in Houston, USA. In contrast, Hanushek, Kain, O’Brien & Rivkin (2005) reported fourth to eighth grade students’ mathematics achievement in Texas, USA was unrelated to teacher certification or qualifications, but did report positive correlations between teacher experience and student mathematics achievement in the first year of teaching. Similarly, in a longitudinal study investigating the effects of certified versus non-certified teachers on the mathematics achievement of fourth to eighth graders in New York City public schools from 1998-2005, Kane, Rockoff & Staiger (2008) reported no difference in mathematics achievement between students in classrooms with certified and non-certified teachers. Partly consistent with Hanushek et al. (2005) and Kane et al. (2008), Leigh (2010) reported significant positive correlations between teaching experience and the achievement scores of 90,000 grade three to seven students on a standardised test in Queensland, Australia, but no correlation with teacher qualifications or degree level. In contrast, a study conducted by Palardy & Rumberger (2008), which examined data from a sample of 20,000 kindergarten students in the United States, reported no association between education level, training, teacher experience and student mathematics achievement.

Despite mixed findings, some research evidence has suggested the relationship between teacher qualifications, experience, and student achievement may be mediated by teaching practices. For example, Guarino, Hamilton, Lockwood & Rathbun (2006) conducted a study using data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998–99 (ECLS-K) collected by the National Center for Education Statistics (NCES) in the USA. The study examined the relationship of teachers’ background variables (teaching certification, coursework in pedagogy, employment status and teaching experience), instructional practices and student achievement (in reading and mathematics) during the kindergarten year. Using two-level hierarchical linear modelling (HLM), the researchers reported positive associations between instructional practices and gains in student achievement in both subjects, but no direct relationship between teacher qualifications and student achievement. A notable exception was teachers’ employment status (part-time and full-time). The findings from the study also suggested the amount of coursework in pedagogy undertaken by teachers was positively related to instructional practices (in reading and mathematics) associated with higher student achievement in both subjects.
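The multilevel intuition behind such analyses, that students are nested within teachers and some achievement variance lies between classrooms, can be sketched with an intraclass correlation. This is a simplified, pure-Python stand-in for full hierarchical linear modelling, and the scores below are invented for illustration.

```python
def intraclass_correlation(groups):
    """Estimate the share of total score variance lying between groups.

    groups: one inner list of student scores per teacher/classroom,
    all assumed to be the same size (a one-way ANOVA-style estimate).
    """
    k, n = len(groups), len(groups[0])
    group_means = [sum(g) / len(g) for g in groups]
    grand_mean = sum(group_means) / k
    # Between-classroom variance of the means, and pooled within-classroom variance
    between = sum((m - grand_mean) ** 2 for m in group_means) / (k - 1)
    within = sum((x - m) ** 2
                 for g, m in zip(groups, group_means) for x in g) / (k * (n - 1))
    # Subtract the sampling noise the group means carry; floor at zero
    var_between = max(between - within / n, 0.0)
    return var_between / (var_between + within)

# Hypothetical classrooms whose scores cluster strongly by teacher
classes = [[70, 72, 71, 69], [55, 57, 54, 56], [80, 82, 81, 79]]
print(round(intraclass_correlation(classes), 2))  # → 0.99
```

A high value, as here, signals that modelling the teacher level separately (as HLM does) matters; a value near zero would suggest a single-level analysis suffices.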

Teacher self-efficacy beliefs.

A large number of studies (e.g., Armor et al., 1976; Ashton & Webb, 1986; Berman, McLaughlin, Bass, Pauly & Zellman, 1977; Gibson & Dembo, 1984; Goddard, Hoy & Hoy, 2000; Skaalvik & Skaalvik, 2007; Tschannen-Moran & Hoy, 2001) have reported positive relationships between teacher self-efficacy and teaching practices, enthusiasm, commitment and teaching behaviours which foster student achievement.


Initially, teacher self-efficacy was conceptualised in terms of locus of control (Armor et al., 1976; Berman et al., 1977). Locus of control refers to the extent to which an individual believes internal factors (which are controllable) or external factors (which are not controllable) influence the outcomes of student learning (Fives, 2003; Tschannen-Moran, Hoy & Hoy, 1998).

The second conceptualisation of teacher self-efficacy is based on social cognitive theory (Bandura, 1977, 1986, 1997). According to Bandura (1997), self-efficacy is the belief in one’s capabilities to organise and execute courses of action to attain the desired goals in a given situation. Thus, teacher self-efficacy is conceptualised as individual teacher belief in her/his ability to plan, organise, and teach so that students achieve educational goals (Skaalvik & Skaalvik, 2007).

Studies based on both conceptualisations of self-efficacy (Bandura, 1977; Rotter, 1966) have been conducted. For example, Guskey (1981) adopted the locus of control conceptualisation of self-efficacy, and developed a 30-item instrument, known as the ‘Responsibility for Student Achievement’ Questionnaire (RSA), to determine teacher internal and external control attributions with regard to student success and failure. When Bandura’s concept of self-efficacy (1977) was introduced, several researchers (Gibson & Dembo, 1984; Riggs & Enochs, 1990; Soodak & Podell, 1996) attempted to define and measure teacher self-efficacy by reconciling the locus of control view with Bandura’s concept of self-efficacy. However, some researchers (Tschannen-Moran et al., 1998; Woolfolk & Hoy, 1990) criticised the conceptual soundness of this approach and developed a model of teacher self-efficacy consistent with social cognitive theory (Bandura, 1977, 1986). In this model, teacher self-efficacy was defined as “teacher belief in his or her capability to organise and execute courses of action required to successfully accomplish a specific teaching task in a particular context” (Tschannen-Moran et al., 1998, p. 22). These researchers developed an instrument known as the “Teachers’ Sense of Efficacy Scale (TSES)” to measure teacher self-efficacy (Tschannen-Moran & Hoy, 2001). Although seen as a reliable self-efficacy instrument, other scholars (Skaalvik & Skaalvik, 2007) have criticised this instrument for lacking gradation of challenge in teacher responses. Gradation of challenge was included in guidelines provided by Bandura (1997, 2006) for the development of self-efficacy scales and refers to provision for respondents to indicate the strength of their self-efficacy beliefs.

A substantial body of research has identified a link between teacher self-efficacy and teaching practices (Ashton, Webb & Doda, 1983; Berman et al., 1977; Cousins & Walker, 2000; Geijsel, Sleegers, Stoel & Kruger, 2009; Gibson & Dembo, 1984; Guskey, 1988; Saklofske, Michayluk & Randhawa, 1988; Stein & Wang, 1988; Tuckman & Sexton, 1990). For example, Gibson & Dembo (1984) examined the relationship between teacher self-efficacy and teacher classroom organisation, instruction, and feedback practices. They reported differences between teachers with high and low levels of self-efficacy for these practices. For example, teachers with low self-efficacy for instruction devoted fifty percent of their time to small group instruction, while highly efficacious teachers spent twenty-eight percent of their time in small group instruction and were more likely to spend time in whole-group instruction, monitoring, and seatwork. In addition, significant differences between high and low efficacious teachers were reported for feedback practices. Similarly, Saklofske et al. (1988) investigated relationships between teacher self-efficacy and teacher presentation, questioning and classroom management strategies. The findings suggested significant and positive associations between teacher self-efficacy and these teaching practices. Allinder (1994) conducted a study involving 200 teachers in the USA and reported significant relationships between teacher self-efficacy and selected instructional practices such as adoption of new teaching approaches and materials, organisation, planning, fairness, enthusiasm and clarity in presentation. Specifically, highly efficacious teachers were more likely to experiment with instruction, be business-like in organisation and student interaction, and demonstrate more confidence in lesson delivery compared to less efficacious teachers.

Further, other researchers have reported similar findings with regard to highly self-efficacious teachers and effective classroom management practices (Ashton & Webb, 1986; Tuckman & Sexton, 1990), enthusiasm and commitment to teaching (Coladarci, 1992; Guskey, 1984), and maintenance of on-task behaviour (Ashton et al., 1983).

Moreover, substantial evidence has established generally positive relationships between the degree of teacher self-efficacy and student achievement (Anderson, Greene & Loewen, 1988; Ross, 1994; Tschannen-Moran et al., 1998), and more specifically in the discipline of mathematics (Allinder, 1995; Ashton & Webb, 1986; Caprara, Barbaranelli, Steca & Malone, 2006; Tracz & Gibson, 1986). For example, Allinder (1995) investigated the relationship between the self-efficacy levels of 19 teachers and 38 student scores in mathematics in the USA. Allinder reported students in classrooms with highly efficacious teachers performed better in all parts (digits, problems, and slope) of a mathematics test, compared to students in classrooms with low efficacious teachers.

Teaching practices

Since the 1960s, the key finding from teacher effectiveness research is that what teachers do in classrooms contributes to student academic achievement (Hattie, 2003, 2009; Stigler & Hiebert, 1999; Killen, 2013; Muijs & Reynolds, 2005). A number of reviews have synthesised the findings from process-product studies on teacher effectiveness (Brophy & Good, 1986; Fraser, Walberg, Welch & Hattie, 1987; Galton, 1987; Reynolds & Muijs, 1999). More recently, meta-analysis (e.g., Hattie, 2003, 2009; Haystead & Marzano, 2009; Scheerens & Bosker, 1997; Seidel & Shavelson, 2007) has affirmed earlier findings and provided new insight into effective teaching practices. The development of theoretical models, such as the dynamic model of educational effectiveness (Creemers & Kyriakides, 2008), has emphasised the complex nature of effective teaching practices (Muijs et al., 2014).

Research findings from reviews of process-product studies.

Most of the early studies of teacher effectiveness focused on the systematic observation of teachers’ actual teaching behaviours (Dunkin & Biddle, 1974), and were referred to as process-product studies (Graham & Heimerer, 1981). Such studies were based on the hypothesis that certain teaching behaviours would positively influence student academic achievement (Graham & Heimerer, 1981; Seidel & Shavelson, 2007).

A large number of different teaching behaviours have been investigated (Seidel & Shavelson, 2007), and the findings synthesised in reviews by several scholars (Brophy & Good, 1986; Fraser et al., 1987; Reynolds & Muijs, 1999). Table 3.1 shows the findings from these reviews of teacher effectiveness research. The most striking feature of Table 3.1 is the consistency with which some teaching behaviours have been identified. Muijs et al. (2014) categorised these behaviours into seven effective teaching practices: opportunity to learn, time on task, classroom environment, structuring, questioning, feedback and practice (Table 3.1).


Table 3.1

Summary of research findings from the reviews of process-product studies

Muijs et al. (2014) | Reynolds & Muijs (1999) | Fraser et al. (1987) | Brophy & Good (1986)
Opportunity to learn | Opportunity to learn | Instructional time | Opportunity to learn
Time on task | Time allocation | Time on task | Student engaged time
Classroom environment | Class management | Environment effects | Class management
Structuring | Academic orientation; Structured lesson; Sequence information; Clarity; Pacing | Reinforcement; Sequence; Advanced organisers; Mastery learning; Tutoring | Well-structured lessons
Questioning | Questioning | High order questions | Asking questions; Clarity of questions; Wait time
Feedback | Feedback | Cues and feedback | React to student responses
Practice | Individual work; Homework | Cooperative programs; Assignments | Handling seatwork

Opportunity to learn and time-on-task.

Research on teacher effectiveness (Borg, 1979; Good, Grouws & Beckerman, 1978; Hafner, 1993; Herman & Klein, 1996) has consistently reported a positive relationship between student achievement and opportunity to learn. According to some scholars (Brophy & Good, 1986; Reynolds & Muijs, 1999), the concept of opportunity to learn refers to the amount/content of curriculum covered or taught and is associated with student engaged time or time-on-task. Time-on-task refers to the amount of time a student is engaged in or attends to learning tasks and activities during the lesson, as opposed to being engaged in non-learning activities such as roll call or disciplinary issues (Brophy & Evertson, 1976; Brophy & Good, 1986; Fisher et al., 1980; Stallings, 1985). Researchers (Brophy & Evertson, 1976; Brophy & Good, 1986; Creemers, 1994; Good & Grouws, 1979a, 1979b; Reynolds & Muijs, 1999) have suggested maximising time-on-task is dependent on effective classroom management, where academic activities run smoothly, transitions are brief and orderly, and little time is spent getting organised or dealing with inattention or resistance. In addition, time-on-task is influenced by the extent to which a teacher emphasises academic instruction and whether learning is seen as the main classroom goal (Reynolds & Muijs, 1999).

Research evidence (Brophy & Evertson, 1976; Coker, Medley & Soar, 1980; Creemers & Reezigt, 1996; Fisher et al., 1980; Lampert, 1988) has suggested effective teachers organise instructional activities, expedite non-instructional activities such as transitions, create a task-oriented and supportive classroom environment, and are able to maximise student engagement rates or time-on-task.

Classroom learning environment.

An extensive body of research (Brophy & Evertson, 1976; Brophy & Good, 1986; Evertson, Anderson, Anderson & Brophy, 1980; Fraser, 1986; Griffin & Barnes, 1986; Secada, 1992; Slavin, 1983; Slavin & Cooper, 1999; Walberg, 1979; Wubbels, Brekelmans & Hooymayers, 1991) has documented the relationship between the quality of the classroom environment and student attainment. The classroom environment is a broad term encompassing a wide range of educational concepts and is defined in many ways based on theory as well as practice. However, more recent research (Creemers & Reezigt, 1999; Freiberg & Stein, 1999; Sinclair & Fraser, 2002) has highlighted the importance of the teacher in creating a supportive and efficient classroom learning environment. Focusing only on the teacher contribution, the classroom environment can be defined as the climate or atmosphere created in the classroom by the physical arrangement of the classroom, the rules and procedures of the classroom, and the way the teacher interacts with students (Creemers & Reezigt, 1999; Freiberg & Stein, 1999).

Thus, there are two factors which contribute to classroom climate. The first is the teacher’s classroom management practices, such as setting up an appropriate physical layout of the classroom (Strivens, 1985), establishing and enforcing rules and procedures in the classroom while persuading students to respect and abide by those rules and routines, and managing student behaviour (Doyle, 1986; Good & Grouws, 1977; Kounin, 1970; Walberg, 1986). Good classroom management is proactive rather than reactive (Yates & Yates, 1990). It includes teacher capacity to manage student behaviour with methods which prevent or redirect misbehaviour, reinforce expected behaviour, and involve the appropriate use by students of materials and other resources (Brophy & Good, 1986; Emmer et al., 1980; Secada, 1992). Therefore, good classroom management is distinguished by teacher expertise in creating efficient learning environments with minimal disruption or misbehaviour, which enhances instruction and maximises student time-on-task (Brophy & Good, 1986; Emmer et al., 1980; Reynolds & Muijs, 1999; Secada, 1992).

Researchers (Doyle, 1986; Emmer et al., 1980; Evertson et al., 1980; Kounin, 1970; Muijs & Reynolds, 2001; Pressley, 1999; Walberg, 1991) have reported effective teachers were proactive in classroom management, which facilitated and supported student engagement and learning. For example, Evertson et al. (1980) observed the teaching practices of 29 seventh- and eighth-grade mathematics teachers who were deemed effective or less effective based on student achievement in two mathematical tests and student ratings. They found effective teachers were proactive classroom managers who demonstrated consistent rule enforcement, monitoring, less acceptance of disruptions and fewer interruptions, transitioned efficiently, and experienced less misbehaviour compared to teachers who were deemed less effective.

The second factor which shapes classroom climate is related to the psychological aspects of the environment, in particular, the relationships between teachers and students (Fraser, 1991; Wubbels & Brekelmans, 1997). Teacher-student interactions (Levy, Wubbels & Brekelmans, 1992) refer to the consistent flow of information related to teacher and student perceptions, attitudes and feelings about each other, and the learning activities at hand during a lesson (Burns, 1982; Rogers, 1982). Teachers interact with students in many different ways in the classroom (den Brok, Brekelmans, Levy & Wubbels, 2002). To describe different types of teacher-student interactions, Wubbels, Creton & Hooymayers (1985) developed the Model for Interpersonal Teacher Behaviour (MITB) based on Leary’s (1957) communication dimensions. Leary (1957) suggested communication could be analysed according to an influence dimension (labelled dominance–submission, DS), related to who controls the communication, and a proximity dimension (labelled cooperation–opposition, CO), related to the level of cooperation between communicators. Wubbels et al. (1985) mapped teacher–student interactions onto these two dimensions in the MITB. They used the proximity dimension to represent the level of cooperation between teacher and student, and the influence dimension to represent the degree of dominance exercised by the teacher in interactions with students. The MITB has been used by Wubbels, Levy & Brekelmans (1997) to empirically determine the characteristics of optimal teacher-student interactions. They reported relationships between the two dimensions (influence and proximity) and student cognitive and affective outcomes. Specifically, their findings suggested the more dominant the teacher, the higher the student achievement, and the more cooperative the teacher, the more students displayed positive attitudes toward learning. A number of researchers, for example, Goh (1994) in Singapore and Brekelmans, van den Eeden, Terwel & Wubbels (1997) in the Netherlands, have subsequently reported similar findings. Based on evidence from this work, Levy, den Brok, Wubbels & Brekelmans (2003) concluded “the more that students perceive teachers to be dominant and cooperative, the more they will achieve cognitively and affectively” (p. 6).
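The two MITB dimensions can be pictured as coordinates on which any observed interaction is placed. The sketch below is an illustrative toy, not Wubbels et al.’s instrument; the scale endpoints and function name are assumptions.

```python
def classify_interaction(influence, proximity):
    """Locate a teacher-student interaction on the two MITB dimensions.

    influence: -1.0 (submission) to +1.0 (dominance) - who controls the exchange
    proximity: -1.0 (opposition) to +1.0 (cooperation) - level of cooperation
    """
    influence_side = "dominant" if influence >= 0 else "submissive"
    proximity_side = "cooperative" if proximity >= 0 else "oppositional"
    return f"{influence_side}-{proximity_side}"

# The profile Levy et al. (2003) associate with the best cognitive and
# affective outcomes: high influence together with high proximity
print(classify_interaction(0.6, 0.7))    # → dominant-cooperative
print(classify_interaction(-0.4, -0.2))  # → submissive-oppositional
```

Treating the dimensions as continuous coordinates, rather than binary labels as here, is what allows the empirical work above to relate degrees of dominance and cooperation to outcomes.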

Other scholars (Borich, 1996; Johnson & Johnson, 1989, 1999) have identified three broad types of classroom environment reflective of different types of learning goals (including classroom rules, social and emotional atmosphere, and moment-to-moment interactions). The first is the cooperative environment, where teachers engage students in small groups and students work together to maximise their own and each other’s learning. The second is the competitive environment, where students compete among themselves to give the right answer or to attain a standard set by the teacher. The third is the individualistic environment, where students work independently on tasks. While mixed evidence has been reported (Dowell, 1975; Fraser, 1986; Kolawole, 2008; Ojo & Egbon, 2005; Qin, Johnson & Johnson, 1995; Zahn, Kagan & Widaman, 1986), generally the cooperative environment has been shown to be associated with better student learning outcomes than competitive and individualistic environments. For example, Johnson & Johnson (1989) synthesised the findings from more than 600 research studies and reported a large effect size (.60) of cooperative learning environments on student achievement. More recently, Hattie (2009) reported similar effect sizes and noted the superiority of cooperative learning environments compared to competitive and individualistic environments.

Johnson & Johnson (1989) emphasised that cooperative learning environments were associated with higher achievement and greater productivity (i.e., higher-level reasoning and more frequent generation of new ideas and solutions), more time-on-task, and greater transfer of what is learned from one situation to another (i.e., group-to-individual transfer). Cooperative environments also promoted the development of caring and committed relationships for every student and benefitted psychological health and social competence; working cooperatively resulted in greater social development, social competencies, self-esteem, self-identity, and ability to cope with adversity and stress.

In summary, a positive and supportive classroom environment can ultimately lead to better student performance and behaviour. In creating an effective classroom environment, teachers’ efforts focus on two major aspects of the classroom: classroom management, comprising the management of student behaviour and an appropriate classroom setting, which maximises student time-on-task and ensures the smooth flow of the lesson; and classroom climate, the psychological aspect of the classroom, which depends on the way the teacher interacts with students (proximity and influence). Further, to enhance student learning, teachers may establish three types of classroom environment, competitive, cooperative and individualistic, of which the cooperative classroom environment is most closely associated with enhanced student learning.

Structuring.

A large number of researchers (Anderson, 1974; Armento, 1977; Ausubel, 1968; Bruner, 1964, 1966; Good & Grouws, 1977; Smith, 1985; Smith & Sanders, 1981; Wright & Nuthall, 1970) have shown lessons presented in structured formats enhance student achievement, and have argued structuring assists momentum, helps students to understand content as an integrated whole, assists students to understand relationships between lesson components, and helps retain student attention (Creemers & Kyriakides, 2008). For example, Smith & Sanders (1981) reported primary student achievement in social science was significantly influenced by the quality of lesson structure, and a subsequent study conducted by Smith (1985) with secondary students reported a similar finding. Based on the findings from these two studies, Smith (1985) concluded lesson structure influenced achievement across subject matter and across grade levels.

Several other researchers (Brophy & Good, 1986; Rosenshine & Stevens, 1986) suggested student attainment is enhanced when teachers structure the lesson by conducting a short review and practice of previous lessons, outlining lesson content and linking it to previous lesson content, focusing on key points or ideas, signalling transitions, and summarising the main points at the end of the lesson. In addition to this type of structuring, several other elements have been reported, including smoothness and momentum (Kounin, 1970), clarity and enthusiasm (Smith & Land, 1981; Walberg, 1986), repetition and review (Smith & Sanders, 1981), and sequencing that mirrors the real world (Nuthall & Church, 1973). For example, Kounin (1970) examined the nature of teacher behaviour when students in 49 first- and second-grade classrooms were engaged in teacher-led and seatwork activities. Using an ecological observation-based inquiry approach, Kounin (1970) determined smoothness and momentum were positively correlated with increased on-task student behaviour and reduced disruptions during recitation and seatwork milieus. Momentum refers to maintaining the flow of the lesson by not engaging in behaviours that slow down the pace of the lesson, while smoothness involves not interrupting the flow of activities or the instructional sequence with irrelevant or tangential information (Emmer & Evertson, 1981).

Moreover, structuring is regarded by some researchers (Graham & Heimerer, 1981) to be one of the important differences between effective and less effective teachers. This contention is supported by findings from a study conducted by Good & Grouws (1977), who observed the teaching practices in the fourth-grade mathematics classes of a group of effective and ineffective teachers (identified in terms of their students’ residual gain scores over the two previous consecutive years). They reported significant differences in the structuring of lessons between the effective and ineffective teachers. In particular, effective teachers provided very clear information about what to learn and how to go about learning it, whereas less effective teachers were less clear in their structure. As a result, students in classes with effective teachers were able to specify what followed and what preceded a lesson, while students of less effective teachers were not certain what they had to do or how they should do it (Good & Grouws, 1977).

Questioning.

Questioning is the most common form of teacher-student interaction in the classroom and is integral to effective teaching (Brophy & Evertson, 1976; Mortimore et al., 1988; Rosenshine & Stevens, 1986). In the teaching-learning process, teacher questions represent a method to communicate the content to be learned and provide guidance as to what is expected of students (Cotton, 1988; Hunkins, 1976; Leven & Long, 1981). Teacher questioning may have a number of different purposes, for example, to ascertain prior knowledge, stimulate discussion, encourage student participation, and manage student behaviour (Brophy & Evertson, 1976; Hyman, 1979; Mortimore et al., 1988; Rosenshine & Stevens, 1986; Wilen, 1991).

The study of questioning in teaching dates back to Socrates and Plato in ancient Greece (Berliner, 1984; Clegg, 1987). During the 1950s and 1960s, research on questioning focused on describing and evaluating the types of questions teachers asked in the classroom (Wilen & Clegg, 1986; Wilen, 1991). Since the late 1960s, investigations of teacher questioning (in process-product studies, for example, Buggey, 1971; Tyler, 1972) have focused on the relationship between discrete observable teacher questioning practices (process) and student outcomes (product), and determined the types of teacher questions and techniques that influenced student learning outcomes (Carlsen, 1991; Clegg, 1987; Wilen & Clegg, 1986; Wilen, 1991).

Teachers have been reported to use academic and non-academic questions (Cunningham, 1987). Academic questions involve content, while non-academic questions generally concern social and affective issues arising in the classroom (Brophy & Good, 1986; Stallings & Kaskowitz, 1974; Wilen, 1987). Mixed findings have been reported with regard to the frequency of academic and non-academic questions. For example, Gall (1970) showed 80% of lesson time was devoted to academic questioning and the remaining 20% spent on non-academic questioning, while the findings from a three-year study (Wragg, 1993), which analysed the questioning of 1000 teachers in primary schools in the United Kingdom, showed 43% of teacher questions were academic and 57% were non-academic.

In addition, researchers (Ozerk, 2001; Wilen, 1987) have differentiated teacher questions into ‘convergent questions’ and ‘divergent questions’ on the basis of Bloom’s Taxonomy (Bloom et al., 1956). Questions which emphasise recalling facts, where students are expected to give simple, discrete, closed-ended or patterned responses, are referred to as convergent or lower order cognitive questions (Ozerk, 2001; Wilen, 1987). Questions which emphasise application, analysis, synthesis, and evaluation are described as divergent or higher order cognitive questions (Ozerk, 2001; Wilen, 1987). Research evidence (Barnes, 1974; Gall, 1970; Galton, Simon & Croll, 1980; Ramirez & Merino, 1990; Wragg, 1993) has consistently shown teachers are more likely to use convergent questions than divergent questions. For example, Galton et al. (1980) conducted a study of primary and secondary schools in Britain and reported 52% of teacher questions were academic, comprising 29% convergent and 23% divergent questions. Similarly, Wragg (1993) showed 35% of teacher academic questions were convergent and only 8% were divergent questions.
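A crude way to operationalise the convergent/divergent distinction is to match question stems against Bloom-level keywords. The sketch below is a toy classifier, not a validated coding scheme; the stem lists and function name are invented.

```python
# Lower-order (recall) stems versus higher-order (application/analysis/
# synthesis/evaluation) stems, loosely following Bloom et al. (1956)
CONVERGENT_STEMS = ("what is", "when did", "name", "list", "define", "recall")
DIVERGENT_STEMS = ("why", "how might", "compare", "evaluate", "what if", "justify")

def classify_question(question):
    """Label a question convergent, divergent, or unclassified by its stems."""
    q = question.lower().strip()
    if any(q.startswith(s) or f" {s}" in q for s in DIVERGENT_STEMS):
        return "divergent"
    if any(q.startswith(s) or f" {s}" in q for s in CONVERGENT_STEMS):
        return "convergent"
    return "unclassified"

print(classify_question("What is the formula for the area of a circle?"))      # → convergent
print(classify_question("Why might two triangles with equal angles differ?"))  # → divergent
```

Applied to a transcript of teacher questions, counting the labels would yield proportions comparable in spirit, though not in rigour, to those reported by Galton et al. (1980) and Wragg (1993).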


Several studies (Brown & Edmondson, 1984; Gall et al., 1978; Mortimore et al., 1988; Soar & Soar, 1979; Stallings & Kaskowitz, 1974) investigated the relationship between teacher questioning and student achievement, and the findings have generally been consistent. For example, Gall et al. (1978) conducted two experiments in the USA with 336 sixth-grade students who participated in ten ecology classes, at the rate of one per day, in which they read a textbook assignment and engaged in an ecology-related activity which either required them to respond to teacher questions or did not. Both groups were tested, and Gall et al. (1978) reported students in the teacher questioning group performed much better than those who were not. In a longitudinal study, Mortimore et al. (1988) reported significant positive effects of frequent teacher questioning which emphasised higher order thinking on student achievement in mathematics and reading. A number of researchers (Palinscar & Brown, 1984; Wittrock, 1981) suggested teacher questioning had a beneficial effect on student achievement when questions were motivating and kept students on task, when questioning focused student attention on what was to be learned, and when questions elicited a depth of processing, activated metacognitive processes in students, and elicited further practice of the curriculum content.

However, the evidence is not conclusive, and mixed findings have been reported with regard to the effects of higher and lower order teacher questioning on student achievement. For example, Buggey (1971) reported higher achievement by students taught with higher order questioning than by students taught with lower order questioning. Surprisingly, Savage (1972) replicated Buggey’s (1971) study with fifth-grade students but did not find a relationship between teacher questioning and student achievement. Further, a review of this work conducted by Rosenshine (1976) concluded lower order cognitive questions were more effective than higher order cognitive questions in promoting student academic achievement, while, contrary to Rosenshine (1976), a review of a number of experimental studies conducted by Winne (1979) concluded there was no relationship between teacher use of higher or lower order questions and student achievement. Gall & Rhody (1987) suggested there were a number of reasons for such conflicting findings, including the different definitions of higher-order and lower-order cognitive questions used by researchers and the diverse range of student samples that have been used in investigations of teacher questioning and student achievement.

Despite contradictory findings with regard to higher and lower order questioning and student achievement, researchers have demonstrated that effective teachers are highly expert in the use of questioning characterized by clarity, academic orientation, at least 3-5 seconds of wait time, and high student participation rates (Berliner, 1984; Brophy & Good, 1986; Levin & Long, 1981; Weil & Murphy, 1982).

Feedback.

Feedback is an important contributor to improving the learning outcomes of students (Biggs, 1999; Brown & Knight, 1994; Stronge, 2002). In the context of teaching and learning, feedback refers to information provided to a student by a teacher about her/his understanding or performance relative to a learning goal (Hattie & Timperley, 2007). It is usually provided after teaching in order to correct misconceptions (Timperley, 2013). Several researchers (Biggs, 1999; Ramaprasad, 1983; Sadler, 1989, 1998; Tittle, 1994) have linked feedback to formative assessment processes. Receiving feedback in the context of assessment enables the student to extend understanding, closing the gap between current and desired levels of understanding (Timperley, 2013). Nonetheless, empirical evidence (Walberg, 1982; Wilkinson, 1981) has shown the effect of feedback may vary. Feedback which has a positive effect gives the student information about the task, her/his performance relative to the task, and self-regulation of learning (Timperley, 2013).

According to Hattie & Timperley (2007), feedback is effective when it provides the student with knowledge of (1) where s/he is going, (2) how s/he is going, and (3) where s/he needs to go next. Further, research evidence has suggested feedback is more likely to be positively associated with student learning when: (1) it matches the student’s level of understanding (Hattie & Yates, 2014); for example, novice learners need task-related feedback because they are acquiring basic knowledge and need assurance and correction, intermediate learners need progress-related feedback because they are beginning to link, relate and extend ideas or move to deeper levels of understanding and need affirmation they are using the right learning strategies or suggestions for alternatives, and advanced learners need feedback which supports self-regulation; (2) it is given in the form of written comments rather than grades or marks for pieces of assessment (Black & Wiliam, 1998; Butler, 1998; Crooks, 1988; Elawar & Corno, 1985); and (3) it is timed “to provide just in time, just for me and just for where I am in my learning process” (Hattie, 2012, p. 137).

A study conducted by Ayres, Dinham & Sawyer (2001) of highly successful senior secondary teachers in NSW public schools demonstrated the important relationship between feedback and student achievement. Based on analysis of data gathered in various disciplines, the findings showed highly successful teachers gave timely, frequent, high-quality, focused, and constructive feedback to students. In addition, teachers provided comprehensive feedback on student written work and gave informal feedback to students individually and collectively through observation and comment during class and responses to questioning. Thus, teachers were able to monitor student progress and performance.

Practice.

Practice is considered an essential part of learning (Helmke & Schrader, 1988; Topping, Samuels & Paul, 2007). From a cognitive perspective, practice “is seen as facilitating the transfer from working memory to long-term memory, so consciously controlled processing becomes automatic processing” (Topping et al., 2007, p. 254). The main purpose of practice is the consolidation of newly acquired knowledge and skills, and the strengthening of automaticity (Anderson, 1980; Helmke & Schrader, 1988; Travers, 1982). Helmke & Schrader (1988) argued a certain level of automaticity is an important prerequisite for further learning as it provides a student with more capacity to attend to learning or the transfer of learning to new contexts.

Research (Rosenshine, 1986, 1987) has shown teachers may use two types of practice activities: guided practice and independent practice. Guided practice begins after the content is taught and occurs when students are guided by the teacher through practice activities to reinforce learning, provide reassurance, and correct student misconceptions (Guzzetti, Snyder & Glass, 1992; Rosenshine, 1987, 1995). Once sufficient guided practice has occurred (when most students can work without error), opportunities for students to practice independently are provided to enable them to acquire confidence and smooth levels of performance in relation to the learning goals determined by the teacher (Rosenshine, 1986, 1987, 1995).

Despite evidence that teachers use these two types of practice, researchers (Cepeda, Pashler, Vul, Wixted & Rohrer, 2006; Silverman, 1990) have reported simply providing practice activities may not contribute to student learning; it is “the right kind of practice that matters” (Simon, 2013, p. 411). For practice to be effective, practice activities must be goal directed, appropriately challenging, sufficient, and frequent enough to meet the demands of expected performance levels (Ambrose, Bridges, DiPietro, Lovett & Norman, 2010; Helmke & Schrader, 1988; Topping et al., 2007).

Several scholars (Ambrose et al., 2010; Ericsson, Krampe & Tesch-Romer, 1993) have contended goal-directed practice activities (i.e., deliberate practice) help students to focus time and energy on what needs to be learned. These contentions have been empirically supported. For example, Rothkopf & Billington (1979) reported a study of 221 high school and college students learning a 1,481-word passage to achieve specified goals, each relevant to a single readily identifiable sentence in the passage. The findings from this study suggested students who read with the objectives in mind spent more time or attention on sentences relevant to the objectives, and less time on sentences not relevant to the objectives, than did students who read without keeping the objectives in mind.

Further, Ambrose et al. (2010) and Ericsson et al. (1993) reported practice/application activities which are appropriately challenging, that is, neither too hard nor too easy relative to students’ current performance, are positively associated with increased student performance. Empirical support for this finding is illustrated in an experimental study conducted by Clarke, Ayres & Sweller (2005), which examined the effect of a designed instructional unit on 24 year nine students learning how to use spreadsheets. The students in the sample had similar mathematical abilities but different spreadsheet abilities (14 had high spreadsheet knowledge and 10 had low spreadsheet knowledge), and were divided into a sequential instructional group and a concurrent instructional group. In the acquisition phase, the instructional format for the sequential group was to develop the necessary spreadsheet skills first, followed by application of the skills to learn mathematics, whereas the instructional format for the concurrent group was to learn spreadsheet skills and apply them to learning mathematics simultaneously. All students were assessed by a mathematical test following the acquisition phase. Based on student test scores, the researchers reported students with low spreadsheet knowledge scored higher if they received sequential instruction rather than concurrent instruction, as the concurrent method of developing the necessary skills and applying them simultaneously was found to be too demanding, which inhibited learning. On the other hand, students with high spreadsheet ability showed the opposite pattern. Clarke et al. (2005, p. 15) concluded instruction, including application/practice activities, should be designed to facilitate the acquisition of knowledge in long-term memory while reducing unnecessary demands on working memory.

In addition, other researchers (Ericsson & Charness, 1994; Ericsson & Lehmann, 1996) have suggested there must be a sufficient amount of practice (Silverman, 1993), and sufficient engaged time in practice activities, if the practice task is to make a positive contribution to learning. With regard to the amount of practice, two types of practice can be distinguished: spaced or distributed practice, and massed practice (Murray & Udermann, 2003). Distributed practice refers to practice in which the amount of time between practice activities is long relative to the length of the practice activity. Massed practice refers to practice activities in which the amount of time between activities is relatively short compared to the time a practice activity takes (Schmidt, 1991).

Researchers (Bloom & Shuell, 1981; Donovan & Radosevich, 1998) have revealed differential effects of distributed practice and massed practice. For example, Bloom & Shuell (1981) compared the effects of distributed practice and massed practice by randomly assigning high school students to two groups, a distributed practice group and a massed practice group. Each group was required to learn 56 new words. The distributed practice group engaged in three 10-minute practice sessions on three successive days, and the massed practice group engaged in a single 30-minute practice session on one day. The groups were tested four days later, and the researchers reported students in the distributed practice group performed significantly better than students in the massed practice group (Bloom & Shuell, 1981).

In summary, it is apparent from the research reviewed that teachers engage students in different types of practice, but merely allocating time to practice is likely to have little influence on student learning, unless practice is goal directed, appropriately challenging, sufficient and frequent enough to enable a student to reach the expected level of performance.

Summary of research findings from process-product studies.

The findings from reviews of process-product studies have identified a range of teaching practices, such as opportunity to learn and time-on-task, classroom environment, structuring, questioning, feedback, and practice, that are positively associated with student achievement. However, process-product studies have been criticized for a singular focus on generic teaching practices associated with cognitive student outcomes on standardized tests (Muijs et al., 2014).

Research findings from meta-analyses.

Recent meta-analytical work has substantially advanced understanding of teacher effectiveness (Muijs et al., 2014). Consistent with prior reviews, meta-analytic studies have confirmed teachers as the strongest contributor to student academic achievement (Muijs et al., 2014).

Seidel & Shavelson (2007).

Seidel & Shavelson (2007) suggested the identification of some teaching practices and their effect sizes were likely to differ across previous research because researchers had adopted different teaching and learning models (i.e., process-product or cognitive processing models) and different research designs focused on different student outcomes (cognitive, learning processes, and motivational-affective outcomes). To confirm their hypothesis, Seidel & Shavelson (2007) re-analysed studies conducted between 1995 and 2004, and reported effect sizes for teaching practices and student outcomes ranging from .01 to .04 for studies based on the process-product model, and from .01 to .21 for studies based on the cognitive processing model. Further, Seidel & Shavelson (2007) developed a framework which included teaching practices from both process-product and cognitive processing models and used it to re-analyse previous process-product studies. They reported the most effective teaching practices were related to the execution of domain-specific learning activities, which had a moderate effect size on cognitive outcomes and high effect sizes for learning processes and motivational-affective outcomes (Seidel & Shavelson, 2007).

Haystead & Marzano (2009).

Haystead & Marzano (2009) conducted a meta-analysis of 329 in-school interventions to explore the effect of instructional strategies on student learning. Their findings suggested 15 effective instructional strategies, including advance organizers, building vocabulary, effort and recognition, feedback, graphic organizers, homework, identifying similarities and differences, interactive games, non-linguistic representations, note taking, practice, setting goals and objectives, student discussion/chunking, tracking student progress, and scoring scales, collectively had an overall effect size of 0.42, which was associated with a 16-percentile-point gain (for example, from the 50th to the 66th percentile) in student academic achievement. However, differential effects were reported between the instructional strategies. The most effective instructional strategies were tracking student progress and scoring scales (1.00), interactive games (0.53), identifying similarities and differences (0.52), building vocabulary (0.51), non-linguistic representations (0.44), note taking (0.44), and student discussion/chunking (0.43).
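The percentile interpretation of an effect size used above can be checked directly. Assuming approximately normal outcome distributions, an effect size d moves the average treated student from the 50th percentile of the comparison distribution to the percentile Φ(d), where Φ is the standard normal cumulative distribution function:

```latex
\text{percentile gain} = \Phi(d) - \Phi(0) = \Phi(0.42) - 0.50 \approx 0.663 - 0.500 = 0.163,
```

that is, a gain of approximately 16 percentile points, moving the average student from the 50th to about the 66th percentile.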

Hattie (2009).

Hattie’s meta-analysis is based on “more than 800 meta-analyses of 50,000 research articles, about 150,000 effect sizes, and about 240 million students” (Hattie, 2012, p. 2). It is the most comprehensive and influential meta-analysis in the teacher effectiveness field. From his meta-analysis, Hattie (2009) distilled 138 factors that influence student achievement and calculated their effect sizes. He ranked the effect sizes and referred to the average effect size (d = 0.4) as the hinge point to differentiate effective from ineffective teaching practices, and argued teaching practices above the hinge point were likely to contribute the most to student learning and achievement (Hattie, 2009).
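Effect sizes of this kind are standardised mean differences. As a sketch of the standard formulation (Hattie also derives effect sizes from pre-test/post-test comparisons within groups), Cohen’s d for an intervention group compared with a control group is:

```latex
d = \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

On this scale, the hinge point d = 0.4 identifies practices whose mean outcomes lie at least 0.4 pooled standard deviations above those of the comparison condition.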

Based on the results of the meta-analysis, Hattie (2009) posited the key to making a difference is to make teaching and learning ‘visible’ to student and teacher, and argued ‘the more the student becomes the teacher and the more the teacher becomes the learner, then the more successful are the outcomes’ (Hattie, 2009, p. 25). According to Hattie (2009, 2012), visible teaching and learning involves: (1) making learning intentions explicit and transparent, and setting goals at an appropriate level of challenge, (2) sharing success criteria for attaining learning intentions, (3) providing deliberate practice for mastery of learning intentions, (4) seeking and providing feedback, and (5) actively participating in the act of learning. Hattie (2009, 2012) grouped effective teaching practices (d ≥ 0.4) into five broad visible teaching and learning categories, namely, learning intentions, success criteria, feedback, student perspectives in learning and student metacognitive learning.


Learning intentions.

Learning intentions refer to what students are supposed to learn, understand, and be able to do within a lesson (Hattie, 2012). For the teacher, learning intentions provide guidance about what to teach and how to teach, and form the basis for assessing student learning during a lesson. According to Hattie (2009), the most effective teaching practices associated with the category of learning intentions are goals (d = 0.56), concept mapping (d = 0.57) and advance organisers (d = 0.41). Extensive evidence (Locke & Latham, 1990) has indicated the importance of goals for improving performance, and in particular, of ensuring challenging rather than ‘do your best’ goals have been set. Hattie (2009) suggested challenging goals were more effective because they provide a clear idea of success and help focus student attention on the behaviours necessary for success. Concept mapping is used to summarise the main ideas to be learned; it is effective when students have some knowledge of the concepts to be mapped (Hattie, 2009). Advance organisers link prior learning to new knowledge and assist students to organise and interpret new information (Hattie, 2009).

Success criteria.

Success criteria refer to the criteria used to evaluate student performance and to determine whether the stated learning intentions have been achieved. According to Hattie (2009), success criteria must be clear and specific to enable a student to monitor learning progress. He noted three strategies with high effect sizes which emphasize success criteria, namely, mastery learning (d = 0.58), Keller’s Personalised System of Instruction (PSI) (d = 0.53), and the use of worked examples (d = 0.57).

Feedback.

Hattie (2009) reported feedback was a powerful moderator of learning (d = 0.73) and, at the same time, the most variable. For Hattie (2012) and others (Sadler, 1989), feedback ‘aims to reduce the gap between where the student is and where she/he is meant to be’ (p. 129). According to Hattie (2009), feedback from the student to the teacher is powerful because it enables the teacher to determine student understanding, facilitating the planning of next steps in learning, and feedback from the teacher to the student is powerful if it enables her/him to answer three questions: Where am I going? How am I going? and Where do I go to next? Hattie (2009) reported the most effective feedback strategies were formative evaluation (d = 0.90) and questioning (d = 0.46).

Student perspective of learning.

Hattie (2009) has argued teachers must adopt student perspectives of learning, as this enables them to construct meaningful learning experiences and appropriate feedback such that ‘each student is able to move progressively through the curriculum’ (p. 238). According to Hattie (2009), this involves ‘teachers seeking evidence about the effectiveness of their teaching, looking for errors in thinking, seeing how students build on prior knowledge and conceptions of learning, asking whether sufficient challenge and engagement exist in learning, and understanding the strategies students are using when learning and confronting difficulties’ (p. 253). It should not be surprising that Hattie (2009) identified time on task and opportunity to practice as important from the student perspective. Further, high effect sizes were reported for spaced vs. massed practice (d = 0.71) and peer tutoring (d = 0.55). Peer tutoring refers to the use of peers as co-teachers (of others and themselves). When students become teachers of other students, and of themselves, it enhances their self-regulatory skills and control over learning (Hattie, 2009).

Metacognitive strategies.

Metacognitive strategies refer to fostering students’ metacognition, which involves higher order thinking and reasoning activities in learning (Hattie, 2009). Metacognition is an important element of problem solving and self-regulated learning. Most studies have examined programs specifically designed to enhance student self-regulated learning and metacognition. Metacognitive strategies are associated with increased academic achievement, with high effect sizes (d = 0.50) reported in meta-analyses of these studies (Dignath & Buettner, 2008; Hattie, 2009). Hattie, Biggs & Purdie (1996) found the most effective metacognitive strategies were those situated in context, using tasks within the same domain as the content, and facilitating high levels of learner activity and metacognitive awareness (Muijs et al., 2014). Hattie (2009) identified high effect sizes for self-verbalization/self-questioning (d = 0.64) and study skills (d = 0.59).

Based on the findings from the meta-analysis, Hattie (2009) concluded learning was a function of the clarity and value of learning intentions, success criteria, the use of multiple teaching strategies which emphasized feedback to students at appropriate levels, adopting a student perspective and teaching students study skills and strategies of learning (p. 199).

Criticisms of meta-analyses findings.

Generally, the findings from meta-analyses provide support for previous findings on teacher effectiveness and have added new insights to the field. However, several criticisms have been raised with regard to meta-analytic methods generally, and Hattie’s (2009) study in particular (Muijs et al., 2014). The first is the difficulty of combining studies in a field characterised by a lack of consensus about concepts and how to measure them (Muijs et al., 2014). This problem is compounded when results from separate meta-analyses are combined, because researchers adopt different practices in conducting a meta-analysis (Muijs et al., 2014). The second issue is that meta-analytical methods only enable measures of direct effects; as a result, the extent to which factors interact, or more distal factors (e.g., school organisation) contribute, is either underestimated or ignored (Muijs et al., 2014). In addition to these issues, the effect sizes reported by Hattie (2009) are generally much higher than those reported by other researchers (e.g., Kyriakides et al., 2013a; Scheerens & Bosker, 1997; Seidel & Shavelson, 2007). Hattie (2009) suggested his effect sizes were higher than those reported by Seidel & Shavelson (2007) because they used studies which controlled for student characteristics. However, Scheerens (2014) proposed a greater variation in processes and outcomes may have contributed to the higher effect sizes in Hattie’s work, and advised cautious interpretation of meta-analytical findings.

Theoretical models of educational effectiveness.

During the last three decades, several researchers (Creemers, 1994; Creemers & Kyriakides, 2006; Scheerens & Creemers, 1989) have responded to criticisms of teacher effectiveness research by ‘integrating teacher effectiveness factors and findings from school effectiveness to develop theoretical models’ (Muijs et al., 2014, p. 243). Adopting an input-process-output framework, the models include multiple factors and levels but focus on the classroom level as the crucial process factor with direct effects on student outcomes (Muijs et al., 2014).

For example, Scheerens & Creemers (1989) developed a model based on teacher effectiveness findings, which included the policy, school and classroom levels of educational effectiveness. Using this model, Creemers (1994) developed the comprehensive model of educational effectiveness, considered to be one of the most influential theoretical models in the field of educational effectiveness (Teddlie & Reynolds, 2000). Briefly, the model includes multiple levels (i.e., context, school, classroom, and student) (Creemers & Kyriakides, 2012). Each level comprises factors that interact to influence student achievement. The model specifies the factors operating at each level, and identifies direct and indirect relationships between levels and student achievement outcomes (Creemers & Kyriakides, 2006). In this model, factors at the context and school levels indirectly influence student achievement through the quality of instruction, time for learning, and opportunity to learn at the classroom level. The model stipulates three factors at the classroom level related to the quality of instruction: teacher behaviour, curriculum, and grouping procedures; thus, teachers are central to instruction. The teaching factors are mediated by time on task and opportunity to learn, which are situated at the individual student level in the model (Creemers & Kyriakides, 2012).

Evidence from research (e.g., Creemers, 1999; Kyriakides, 2005) has provided support for the comprehensive model (Creemers, 1994). However, the model has been criticised for lacking clear conceptualization for measuring the dynamic nature of educational processes, ignoring newer goals of education, such as self-regulated learning and meta-cognition, and for the difficulties encountered in testing the model empirically (Creemers & Kyriakides, 2012). The dynamic model of educational effectiveness (DMEE) was developed by Creemers & Kyriakides (2006) to address these limitations. In the DMEE, outcomes were more broadly defined to include new goals of education such as non-cognitive and self-regulated learning and related theories of teaching and learning were adopted in the model to specify constructs associated with quality teaching (Creemers & Kyriakides, 2012). Additionally, the dynamic model provides guidance for policymakers, leaders and teachers to improve practices by considering the optimal fit of factors identified within the model and current system, school or classroom contexts (Creemers & Kyriakides, 2012).

The dynamic model of educational effectiveness (DMEE).

The DMEE displayed in Figure 3.2 has several characteristics. First, the dynamic model is multilevel, and factors operating at four different levels (i.e., student, classroom, school, and system) are assumed to, directly and indirectly, influence student achievement. Figure 3.2 shows system-level factors influence teaching and learning formally through the development and evaluation of national and/or regional level policy on teaching and school learning environments (Creemers & Kyriakides, 2012). Below this level, Figure 3.2 shows school-level factors, such as the development and evaluation of policies on teaching and the school learning environment, shape the context in which teaching and learning occur. However, it is classroom-level factors (see Figure 3.2) which are emphasized in the DMEE, particularly the instructional role of the teacher (Creemers & Kyriakides, 2012). Thus, only teaching factors (i.e., orientation, structuring, modelling, practice, questioning, time management, assessment, and classroom environment) associated with student achievement, and related to the instructional role of the teacher, are considered in the DMEE (Creemers & Kyriakides, 2012).

Figure 3.2. Dynamic model of educational effectiveness (DMEE). (Source: Creemers & Kyriakides, 2012)

Figure 3.2 shows at the student level the DMEE includes two categories of student background characteristics, sociocultural/socioeconomic and psychological, which have been found to explain most of the variance in student learning and achievement (Hattie, 2009). Further, the DMEE is based on the assumption that the factors not only interact within levels, but also dynamically interact across levels, contributing to student cognitive, affective, psychomotor and new learning outcomes (Creemers & Kyriakides, 2012).

Teaching factors.

Second, at the classroom level, it is assumed teachers contribute to student learning and achievement, based on teacher effectiveness research findings (e.g., Brophy & Good, 1986; Muijs & Reynolds, 2001a; Rosenshine & Stevens, 1986). In particular, the instructional role of the teacher is emphasised, and therefore eight teaching factors (i.e., orientation, structuring, questioning, modelling, application (or practice), assessment, time management, and classroom as a learning environment) associated with student learning and achievement, and related to the teacher’s instructional role, are included in the model (Creemers & Kyriakides, 2012).

Orientation refers to providing learning goals for a lesson or series of lessons and/or challenging students to identify the rationale for which certain activities take place (Panayiotou et al., 2014). Orientation activities make lessons meaningful and encourage high levels of student engagement and participation (De Corte, 2000).

Structuring refers to teacher behaviours which begin with an overview and/or review of goals, outline content to be covered, signal transitions, focus attention on main ideas, and review the lesson. In addition, it refers to teacher capacity to gradually increase the level of challenge in lessons (Panayiotou et al., 2014).

Questioning is characterized by several elements, namely, a mix of product (simple response) and process (complex response) questions, pause length for responses, question clarity, questions at an appropriate level of difficulty, and responding to student answers (Panayiotou et al., 2014).

Teaching modelling is based on research (Grieve, 2010; Kyriakides, Campbell & Christofidou, 2002) that has shown effective teachers assist students to use and/or develop problem-solving strategies. Teaching modelling is defined in terms of the role the teacher plays in helping students to use a problem-solving strategy. Thus, teachers may present clear problem-solving strategies or invite students to explain how they would solve a particular problem (Panayiotou et al., 2014).

Application (or practice) activities have been shown to contribute to student learning (Helmke & Schrader, 1988; Topping et al., 2007). In addition to the frequency of practice tasks, application in the dynamic model refers to whether the provided application tasks are practice based (repeating what has been learned in the lesson) or application based (set at a more complex level than the lesson) (Panayiotou et al., 2014).

Classroom learning environment encompasses five aspects of teacher generated interaction, specifically designed to create the classroom as a learning environment: teacher-student interaction, student-student interaction, student treatment by the teacher, fair competition among students, and dealing with classroom misbehaviour (Panayiotou et al., 2014). Research (den Brok, Brekelmans & Wubbels, 2004) has shown the first two components contribute to classroom climate, but the dynamic model focuses on the type of interactions and the effect teacher initiatives have on establishing on-task student behavior (Panayiotou et al., 2014). The other components focus on teacher efforts to establish an effective learning environment (Walberg, 1986).


Opportunity to learn and time on task are important contributors to student learning (Emmer & Evertson, 1981). Effective teachers are able to organize and manage classrooms for learning and student engagement (Creemers & Reezigt, 1996). An important element of time management is the capacity of the teacher to efficiently handle classroom disruption with minimum effect on teaching and learning time (Panayiotou et al., 2014).

Assessment is an important part of teaching, and recent meta-analysis findings have reported the significant contribution of formative assessment to student learning outcomes (Hattie, 2009). Assessment in the dynamic model refers to the frequency of administering various forms of assessment, use of formative assessment and reporting to parents (Panayiotou et al., 2014).

The eight teacher factors define an integrated approach to quality teaching. Thus, teaching skills associated with direct teaching and mastery learning (structuring and questioning) and with constructivist approaches (orientation and modelling) are included in the model. Creemers & Kyriakides (2012) have argued the integrated approach is consistent with principles of teaching for understanding and promotes the achievement of new goals in education, such as self-regulated and metacognitive learning.

Measurement dimensions of teaching factors.

Third, in the dynamic model (see Figure 3.2) the effectiveness of factors operating at the system, school, and classroom levels may be determined by examining five dimensions: frequency, focus, stage, quality and differentiation. The frequency dimension provides a quantitative measure of the functioning of effectiveness factors, while the other four dimensions (i.e., focus, stage, quality, differentiation) determine qualitative functioning and, at the classroom level, ‘help to describe the complex nature of effective teaching’ (Panayiotou et al., 2014, p. 131).

Frequency refers to how often an activity associated with the factor is present in a classroom/school/system. At the classroom level, the frequency is measured by taking into account the number of tasks and the duration of tasks associated with each teaching factor.

The focus dimension considers that activities may have single or multiple purposes and range from specific to general. Research findings (Schoenfeld, 1998) have shown activities that reinforce a single purpose are likely to be successful but have small effects, as other purposes may not be accomplished and activities may be isolated. At the same time, if activities target multiple purposes there is a chance that specific purposes may not be addressed (Kyriakides, Creemers & Antoniou, 2009).

At the classroom level, stage refers to the period of time in which a teaching activity takes place (Kyriakides et al., 2009). The inclusion of this dimension is based on research (Creemers, 1994; Gray et al., 2000; Slater & Teddlie, 1992) that has emphasised the importance of the continuity of activities over a long period of time to ensure high effects in terms of student learning (Kyriakides et al., 2009).

The quality dimension refers to the properties of the factor itself as outlined in the literature (Kyriakides et al., 2009). It is concerned with the functioning and clarity of the factor for students, and its influence on student engagement (Creemers & Kyriakides, 2006, 2008; Kyriakides et al., 2009).

Differentiation refers to the extent to which activities associated with factors are implemented in the same way for all students, teachers, and schools. This dimension takes into account findings of research (e.g., Campbell, Kyriakides, Muijs & Robinson, 2003) that have highlighted system, school, and classroom differences in terms of educational effectiveness (Kyriakides et al., 2009). At the classroom level, differentiation of teaching factors is likely to be determined by the learning needs and background characteristics of students (Kyriakides et al., 2009).

Developmental levels of teaching skill.

Fourth, in the dynamic model emphasis is placed on the interrelated nature of teacher factors and the importance of grouping factors. Based on a longitudinal study, Kyriakides et al. (2009) showed the eight teacher factors and their five measurement dimensions could be grouped into five developmental levels of teaching skills. The levels of teaching skills are shown in Table 3.2.

Further, Kyriakides and his colleagues reported the levels of teaching behaviour were associated with student outcomes, and teachers who demonstrated higher levels of teaching behaviour were found to be more effective. Antoniou & Kyriakides (2013) contended these findings were consistent with stage models of professional development and an important contribution to the field because the content of each level (i.e., of teaching skills) was clearly delineated.

The first three levels are associated with factors related to direct and active teaching and gradually move from basic requirements (e.g., time management, structuring) to advanced requirements (e.g., asking process and product questions, giving feedback) (Antoniou, Kyriakides & Creemers, 2015; Kyriakides et al., 2009).

Teaching skills also move from teacher-centred to more student-centred approaches that engage students more actively (Antoniou et al., 2015; Kyriakides et al., 2009). The last two levels (four and five) are more challenging for teachers because they must differentiate instruction, and demonstrate the capacity to utilize new teaching approaches (Antoniou et al., 2015; Kyriakides et al., 2009).

In Table 3.2, level one teaching skills are associated with the basic elements of the direct teaching approach (structuring, application, questioning, time management, teacher-student relations) and the quantitative characteristics of teaching factors. Teachers at this level are able to effectively use daily teaching routines such as keeping students on task, structuring content, asking questions, setting practice activities and assessing student work (Kyriakides et al., 2009).

Table 3.2

Developmental levels of teaching skill

Level 1 (Basic elements of direct teaching): frequency of time management; stage of time management; frequency of structuring; frequency of application; frequency of assessment; frequency of questioning; frequency of teacher-student relation.

Level 2 (Incorporating aspects of quality in direct teaching and touching on active teaching): stage of structuring; quality of application (practice); stage of questioning; frequency of student relations; focus of application (practice); stage of application (practice); quality of questions.

Level 3 (Acquiring quality in active/direct teaching): stage of student relations; stage of teacher-student relation; stage of assessment; frequency of teaching modelling; frequency of orientation; focus of student relations; quality of feedback; focus of questioning; focus of teacher-student relation; quality of structuring; quality of assessment.

Level 4 (Differentiation of teaching): differentiation of structuring; differentiation of time management; differentiation of questioning; differentiation of application; focus of assessment; differentiation of assessment; stage of teaching modelling; stage of orientation.

Level 5 (Achieving quality and differentiation in teaching using different approaches): quality of teacher-student relation; quality of student relations; differentiation of teacher-student relation; differentiation of student relations; focus of orientation; quality of orientation; differentiation of orientation; quality of teaching modelling; focus of teaching modelling.

Source: Creemers & Kyriakides, 2012

At level two, qualitative aspects of structuring, practice and questioning related to direct teaching were reported (see Table 3.2). Three dimensions of application (focus, stage, and quality) are indicative of teaching skill at this level, with the exception of differentiation (Kyriakides et al., 2009). Also indicative of this level of teaching skills is the quality of teacher questioning, i.e., teachers phrase product and process questions appropriately for students. As well as being able to demonstrate quality with regard to direct teaching, teachers at this level are able to encourage interactions with students which facilitate student engagement in learning (Kyriakides et al., 2009).

The eleven teaching skills at level three in Table 3.2 are related to active teaching (Kyriakides et al., 2009). For example, the focus and quality dimensions of assessment, structuring and questioning were reported at this level. Teachers at this level are able to create positive learning environments, with the exception of differentiation. Teachers also provide constructive feedback on student answers, enhancing the establishment of a positive learning environment. New teaching skills at this level include the frequency of modelling and orientation associated with new teaching approaches. Thus, teachers at this level are able to use strategies related to direct and active teaching as well as teaching techniques associated with constructivism (Kyriakides, Archambault & Janosz, 2013).

At level four, teaching skills are mainly concerned with the differentiation dimension of factors associated with direct teaching (see Table 3.2). Teachers at this level can identify differentiated student learning needs and are able to offer appropriate application, structuring, questions, and assessment based on the differentiated needs of students (Kyriakides et al., 2009). Further, teachers at this level demonstrate the ability to engage in orientation and modelling tasks at appropriate times during the lesson.

The nine teaching skills at level five are concerned with qualitative characteristics of factors related to both direct/active teaching and new teaching approaches (Kyriakides et al., 2009). For example, the quality and differentiation dimensions of the classroom environment factor are associated with direct/active teaching, and the focus, quality and differentiation dimensions of orientation and modelling are associated with new teaching approaches. Thus, teachers at this level use different teaching approaches and incorporate qualitative characteristics of these approaches into their practice (Kyriakides et al., 2009).

Empirical support for the DMEE.

While it is acknowledged there is mounting evidence to support the validity of the DMEE (e.g., Antoniou & Kyriakides, 2011, 2013; Creemers & Kyriakides, 2010; Creemers, Kyriakides & Antoniou, 2012; Kyriakides & Creemers, 2008, 2009; Kyriakides, Creemers, Antoniou & Demetriou, 2010; Kyriakides et al., 2013, 2013a; Panayiotou et al., 2014), this section is limited to discussion of evidence for classroom level factors, in particular, research related to teacher factors.

The DMEE is based on the assumption that eight teacher factors contribute to student achievement, and five dimensions can be used to determine the functioning of these factors. Several studies have provided evidence to support this assumption. For example, a longitudinal study by Kyriakides & Creemers (2008) investigated teacher effectiveness and year five student achievement in three subjects (including mathematics and Greek language) in fifty Cyprus primary schools. The findings from this study confirmed teaching factors could be defined in terms of the five dimensions (frequency, stage, focus, quality and differentiation) of the DMEE, and emphasised the importance of using the five dimensions of teacher factors to examine variation in student cognitive and affective outcomes (Creemers & Kyriakides, 2012).

A further study (Kyriakides & Creemers, 2009) investigated teacher factors in the DMEE and student achievement at the end of pre-primary school in Cyprus. The findings from this study indicated all teacher factors were related to student achievement in mathematics and language. Kyriakides & Creemers (2009) compared these findings with those of the first study (2008) and reported that, while all factors were related to student achievement at both phases of schooling, some factors were more important for pre-primary than for primary and vice versa. They suggested the differential effects were likely related to the developmental stage of students and to the functioning of the curriculum at different phases of schooling. Therefore, the generic nature of teacher factors was supported (Creemers & Kyriakides, 2012). A follow-up study conducted in the same schools by Creemers & Kyriakides (2010) reported similar findings with regard to relationships between teacher factors and student achievement. In addition, the findings from a meta-analysis based on 167 studies of teacher effectiveness indicated the eight teacher factors in the DMEE were moderately associated with student learning and achievement, while teacher factors not included in the DMEE were weakly associated with gains in student achievement (Kyriakides et al., 2013a).

The DMEE is based on the assumption that teacher factors and dimensions are interrelated, and the model emphasises the grouping of teacher factors to explain gains in achievement. This assumption has been supported by several recent studies. A cross-sectional study conducted in Canada by Kyriakides et al. (2013) showed teacher skills could be grouped into four clearly discernible types of behaviour, moving from skills related to direct teaching to more advanced skills associated with newer teaching approaches. Although the evidence did not support the interrelatedness of teacher factors, the findings did provide support for a developmental level model of teacher skills, with teachers moving from simple to more advanced types of teaching behaviour (Creemers & Kyriakides, 2012).

Further, Antoniou (2013) utilised an experimental longitudinal design to investigate the extent to which the same teachers can be classified into the same developmental levels at three different points in time. The results from this study provided support for the internal validity and generalisability of the five developmental levels of teacher skills, with specific content based on the DMEE. The findings also affirmed stage models (e.g., Berliner, 1994), and contributed new insight by identifying specific teacher skills associated with each developmental stage. Antoniou (2013) concluded the first three stages (or levels) are related to direct and active teaching but gradually move from the basic requirements of quantitative characteristics of teaching activities to more advanced use of skills, as determined by the qualitative characteristics of teaching activities. Teacher skills associated with the last two levels are more demanding, as teachers are expected to differentiate and utilise new teaching approaches (Antoniou, 2013). Other researchers (Antoniou & Kyriakides, 2011; Kyriakides et al., 2009) have reported teachers who have skills at higher levels (levels 4 and 5) were more effective than those at lower levels in terms of their students’ learning and achievement.

Based on research evidence on the developmental stages of teaching skills, Antoniou & Kyriakides (2011) developed the Dynamic Integrated Approach (DIA) to teacher professional development. The DIA recognises that each teacher, or group of teachers with a similar background (i.e., teaching experience, initial training qualifications, duties), may have different needs and priorities for professional improvement, and may need to concentrate their efforts on developing the different skills of their developmental stage. Thus, the authors argued that teacher training and professional development should focus on addressing specific groupings of teacher factors associated with improvement in student learning, rather than focus on an isolated teaching factor each time, as proposed by the competency-based approach (CBA) (Thomson, 1991; Whitty & Willmott, 1991), or include the whole range of factors, as implied by the holistic approach (HA) (Calderhead, 1989; Clift, Houston & Pugach, 1990). Antoniou & Kyriakides (2013) provided empirical support for their argument from a study of 130 Cypriot primary school teachers and 2356 students. In this study, they investigated the influence of the DIA on student mathematics achievement and teaching skills compared to the holistic approach. At the beginning of the study, the teaching skills of all the teachers and the mathematics achievement of their students were assessed. Teachers were then randomly allocated to one of two intervention groups: DIA and HA. Teachers in the DIA group engaged in critical reflection focused on the teaching skills of the dynamic model corresponding to their developmental stage and needs. Teachers in the HA group developed action plans using the holistic approach, addressing teaching factors without specific reference to their developmental stage. At the end of the study, teaching skills and student achievement were measured again, and Antoniou & Kyriakides (2013) found teachers who employed the DIA made significant progress in developing teaching skills and were more effective than those who employed the holistic approach.

The generalisability of the DMEE to other western educational contexts was recently investigated in a cross-national longitudinal study conducted by Panayiotou et al. (2014). The study was conducted in six European countries (Belgium, Cyprus, Germany, Greece, Ireland and Slovenia) to investigate the extent to which the teaching behaviours included in the DMEE were associated with student achievement in mathematics and science in these six diverse educational contexts. The findings revealed the eight teacher factors were significantly related to student achievement gains in mathematics and science in all participating countries. Further, the reliability and validity of using student ratings to measure the functioning of teacher factors in the DMEE were supported (Panayiotou et al., 2014).

Most recently, the DMEE was applied in a developing country context, Ghana, by Azigwe, Kyriakides, Panayiotou & Creemers (2016) to investigate the extent to which the teacher level factors included in the DMEE are associated with grade six student achievement gains. The study was conducted in 73 schools in the Upper East Region of Ghana. Student mathematics achievement gains were obtained by administering written tests to 4386 grade six students at the beginning and end of the 2013–2014 school year. Data on the quality of teaching of the 99 grade six teachers were collected using the two observation instruments and the student questionnaire developed by Creemers & Kyriakides (2012) based on the DMEE. Multilevel analyses of the data supported the generalisability of the DMEE in a developing country context, as the teacher level factors of the dynamic model and their measurement dimensions were found to have statistically significant effects on student mathematics achievement. The study also stressed that the teacher factors of the DMEE were able to explain more variance in student mathematics achievement than in the study conducted by Panayiotou et al. (2014) in the context of developed countries.

In summary, there is extensive empirical support for the DMEE, but most of the evidence is based on studies conducted in the European context. It is apparent that further studies conducted in other countries are needed to determine the robustness of the dynamic model of educational effectiveness (DMEE).


Methods to identify teacher effectiveness

A number of different strategies have been used to identify effective teachers. In practice, the most widely used measure has been classroom observation (Little, Goe & Bell, 2009). However, other measures have included principal evaluations, student ratings of teacher performance, teacher self-reports (e.g., surveys, teaching logs and interviews) and analysis of classroom products such as student assignments and test scores (Goe, Bell & Little, 2008). More recently, researchers have used student academic outcomes (e.g., Ayres, Dinham & Sawyer, 2004) and value-added measures (e.g., Stronge, Ward & Grant, 2011) to identify effective teachers.

Classroom observation.

Classroom observation is the process whereby general or subject-specific teaching practices are measured by observing the full dynamic of classroom teaching. It can be conducted formally or informally (Goe et al., 2008). It is commonly used in schools and may provide significant information regarding teaching practices and behaviour, useful for both formative and summative purposes. For example, Kane, Taylor, Tyler & Wooten (2011) identified effective teachers in Cincinnati public schools, U.S.A., using the Cincinnati Public Schools’ Teacher Evaluation System (TES) classroom observation instrument, and reported that a teacher’s score on the classroom observation components of the TES reliably predicted differences in student mathematics and reading achievement.

Principal evaluation.

Principal evaluation may be structured or unstructured and is characterised by a wide variety of procedures. Most often this type of evaluation is conducted with beginning teachers for certification and tenure, or with underperforming teachers who may be facing dismissal (Goe et al., 2008). Studies (Harris & Sass, 2007; Jacob & Lefgren, 2008; Medley & Coker, 1987) which have compared principal evaluations of teachers to student academic achievement have reported mixed results. A number of researchers (Coker et al., 1980; Medley & Coker, 1987; Scriven, 1981; Stodolsky, 1984) have suggested principal evaluation is unlikely to be accurate, particularly as principals rarely receive training in the use of this type of evaluation. In addition, Peterson (2004) has argued limitations are imposed by reporting systems which often rely on checklists and narrow anecdotal categories to record observations.

Student ratings of teacher behaviour.

Student ratings of teacher performance are mostly used at the tertiary level (Cashin, 1999; Seldin, 1999). A number of studies (MET, 2010; Worrell & Kuterbach, 2001) have suggested student ratings of teacher performance are accurate and can provide reliable feedback on aspects of teaching practice that are predictive of student learning. For example, Worrell & Kuterbach (2001) examined ratings of teacher attributes and program quality by two groups of academically talented high school students in a six-week summer program and found the student ratings of teacher and program quality to be valid. In another study, Irving (2004) reported that high school student ratings of mathematics teachers, on five dimensions associated with the National Board Professional Teaching Standards in the United States, correctly classified teachers’ National Board Certified status with 70% accuracy. Despite this evidence, a number of authors (Goe et al., 2008) have suggested this method may be susceptible to bias because student ratings may be based on other factors, such as personal like or dislike of teachers.

Teacher self-report.

Teacher self-reports arise when teachers are requested to provide details of their classroom teaching practices. This type of reporting may be conducted through surveys, instructional logs, and interviews, where teachers are invited to express their own views and reflect on personal and organisational factors that may have influenced their teaching performance. However, issues of reliability and validity have been raised with regard to the use of teacher self-reports, and research findings tend to be mixed. For example, Porter, Kirst, Osthoff, Smithson & Schneider (1993) used teacher-completed surveys and reported teacher survey responses were consistent with other measures of teacher performance. In contrast, Camburn & Barnes (2004) reported discrepancies between teacher responses in an instructional log and independent classroom observations of teacher behaviour.

Student academic outcomes.

When employed carefully and thoughtfully, student academic outcomes may contribute to judgements of effective teaching (Fenwick, 2001). For example, Ayres et al. (2004) investigated the teaching practices of 25 teachers they identified as effective using student test scores in the New South Wales (Australia) Higher School Certificate Examination. While this is a commonly used method for identifying effective teachers, Fenwick (2001) suggested student academic achievement is likely to be influenced by a complex interplay of factors particular to an institution, teaching context, and student disposition. Further, Peterson (2004) cautioned that student academic achievement data should only be used where researchers have access to the data and are certain it is reliable and valid.

Value added measures.

A value-added measure utilises students’ previous test scores to create predicted test scores for a given year. The difference between the predicted and actual test score is referred to as a growth score, and the average of student growth scores is used to determine the teacher contribution to student learning, measured by the value-added score (Goe, 2008). Some value-added measures use only student prior achievement scores in the calculation of growth scores, while other measures also include student gender, ethnicity, socioeconomic status and teacher experience. Generally, a teacher is considered effective when students perform better than predicted based on previous test scores, and ineffective when students perform worse than predicted (Goe, 2008).
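The growth-score arithmetic described above can be sketched in a few lines of code. This is a minimal illustration only: predicting each current score from a single prior score by simple least squares is an assumption made for the sketch, not the model used by any operational value-added system, and the function names are hypothetical.

```python
# Minimal sketch of a value-added growth-score calculation.
# Assumption: current scores are predicted from one prior score by
# simple least squares; operational VAMs use far more elaborate models.

def predict_scores(prior, current):
    """Least-squares prediction of each current score from the prior score."""
    n = len(prior)
    mean_p = sum(prior) / n
    mean_c = sum(current) / n
    slope = (sum((p - mean_p) * (c - mean_c) for p, c in zip(prior, current))
             / sum((p - mean_p) ** 2 for p in prior))
    intercept = mean_c - slope * mean_p
    return [intercept + slope * p for p in prior]

def teacher_value_added(prior, current, teacher_of):
    """Average growth score (actual minus predicted) for each teacher."""
    predicted = predict_scores(prior, current)
    growth = {}
    for actual, pred, teacher in zip(current, predicted, teacher_of):
        growth.setdefault(teacher, []).append(actual - pred)
    # A positive average means the teacher's students beat their predictions.
    return {t: sum(g) / len(g) for t, g in growth.items()}
```

For example, with two teachers whose students share the same prior scores, the teacher whose students exceed their predicted scores receives a positive value-added estimate and the other a negative one.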

Although Goe (2008) noted value-added measures provide a relatively objective and inexpensive technique for estimating the proportion of variability in student learning attributable to teachers, and are also useful in determining which teacher characteristics matter for student learning, she also warned such measures provide limited insight into which teaching practices actually matter for student learning. Further, several authors (e.g., Braun, 2005; Glass, 2005; Kupermintz, 2003; McCaffrey, Koretz, Lockwood & Hamilton, 2004) contended it is difficult to isolate individual teacher contributions from a range of other factors, such as classroom and school characteristics, likely to influence student growth and experience. In addition, Goe (2008) cautioned that care needs to be taken when collecting and analysing student data because incomplete data sets and small samples are likely to distort value-added measures.

A number of value-added measures have been developed. The most widely used include the Dallas Value-Added Accountability System (DVAAS), the Rate of Expected Academic Change (REACH), and the Education Value-Added Assessment System (EVAAS) (McCaffrey, Lockwood, Koretz & Hamilton, 2003). The DVAAS is a well-cited measure which has been employed by the Dallas school system for a number of years (Webster & Mendro, 1997). This measure uses student-level characteristics to adjust test scores prior to analysis and shows the relationship between adjusted test scores in adjacent grades. The REACH measures student growth against specified goals, which Doran & Izumi (2004) argued is a more constructive way to measure student academic gains. The EVAAS is the best known and most widely used approach (Ballou, Sanders & Wright, 2004; Braun, 2005). The EVAAS is highly parsimonious and uses all the test information available for a given cohort of students, together with the identities of the schools and teachers (Ballou et al., 2004; Braun, 2005).

The basic model of the EVAAS is an equation that includes the history of student test scores to estimate the effectiveness of the teacher (Ballou et al., 2004; Braun, 2005). The EVAAS has been criticised for not controlling for socioeconomic status (SES) and other background factors such as race and gender (Kupermintz, 2003; Linn, 2001). However, research evidence (e.g., Ballou et al., 2004) has not supported these contentions.

Scholars have attributed the increasing popularity of using value-added models (VAM) to estimate teacher and school effects on student learning to several factors: as a growth-based system, VAM provides a clearer picture of student learning (i.e., growth) than attainment-based accountability systems such as the Adequate Yearly Progress (AYP) provisions (Braun, 2005; Murphy, 2012); and VAM is superior to traditional measures of teacher effectiveness (e.g., principal evaluation) that are based on subjective evaluation (Weisberg, Sexton, Mulhern & Keeling, 2009). On the other hand, fundamental concerns have been raised that may bedevil the measurement of teacher and school effectiveness using VAM:

- School systems, in practice, do not operate by randomisation, and students are not randomly assigned to classrooms and teachers. Random assignment of students to teachers for the purpose of measuring teacher effectiveness is rarely feasible (or ethical), and the effectiveness of schools and teachers is therefore estimated under less than ideal conditions (Braun, 2005; Goldhaber & Anthony, 2004; Rothstein, 2009).

- There is a plausible risk of incorrect model specification in VAM, as there is little consensus within the field about which model specification produces the most accurate value-added measures (Murphy, 2012). Different model specifications can lead to different teacher effectiveness rankings, meaning model choice can directly affect teachers’ ratings (Briggs & Domingue, 2011; McCaffrey, Lockwood, Koretz, Louis & Hamilton, 2004).

- In VAM, measures of teacher effectiveness are norm-referenced rather than criterion-referenced (Kupermintz, 2003; Murphy, 2012). Thus, the measure of effectiveness for a particular teacher only indicates where that teacher falls in comparison with other teachers in the system; a teacher measured against a weak group of teachers would obtain a better value-added measure than the same teacher measured against a strong group (Murphy, 2012).

Furthermore, researchers investigating the methodological development of VAM for measuring school and teacher effectiveness (Goldstein & Sammons, 1997; Rowe & Hill, 1994; Hill & Goldstein, 1998; Creemers, Kyriakides & Sammons, 2010; Thomas & Smees, 2000) have suggested that student background factors need to be controlled for in value-added estimations. For example, Thomas & Smees (2000) conducted a study in six regions (Lancashire, London, Jersey, Scotland, the Netherlands, and England as a whole) with the objective of establishing the optimal multilevel model(s) for measuring school effectiveness over time, using a value-added approach across a range of pupil outcomes (academic and attitudinal). The study used datasets of pupil academic and attitudinal outcomes, and results for different cohorts and curriculum stages, relating to the six regions. The explanatory variables included prior attainment (or attitude) data, background factors (e.g., entitlement to free school meals [FSM, a measure of low family income], ethnicity, gender and age) and school context (the percentage of low-attaining pupils, drawn from approximately the bottom 25% ability band). Adopting a 'value added' approach as the methodology which separated and measured the school effect and that of other external factors (such as pupil prior attainment and socio-economic status) on pupil performance, the authors noted the following:

‘To measure schools' value added' pupil attainment measures are needed at the beginning and end of each curriculum stage as well as other background data. A valid framework for secondary school evaluation in the UK needs to incorporate at least four underlying dimensions of effectiveness (in terms of different outcomes, pupil groups, cohorts and curriculum stages) and also needs to contextualise the results with regional information’ (p. 12).

Multiple measures.

Given mixed evidence with regard to individual measures of teacher effectiveness, a number of scholars (e.g., Ayres et al., 2004; Smith, Baker, Hattie & Bond, 2008; Stronge, 2007; Wray, Medwell, Fox & Poulson, 2000) have recommended using multiple measures to determine teacher effectiveness, including test-based measures (e.g., student academic achievement scores) and non-test-based measures (e.g., classroom observations), in the identification of effective teachers. Coe, Aloisi, Higgins & Major (2014) argued: “a single source of evidence may suggest the way forward, but when it is confirmed by another independent source it starts to become a credible guide” (p. 3).

This multiple-measure approach was adopted in the Measures of Effective Teaching (MET) study conducted in the United States between 2009 and 2011 (MET, 2013). The goal of the project was to identify which measures gave the best and most accurate information about how well a teacher helps his or her students learn, and how these measures should be used together to see the whole picture of a teacher's effectiveness (MET, 2013).


In the first phase (2009–2010), the MET project researchers investigated the effectiveness of 3,019 teachers using a combined measure comprising value-added data based on student achievement gains, classroom observations, and student survey responses (MET, 2013). The value-added data were based on state achievement test scores in mathematics and English, and on supplemental mathematics and reading assessment scores, to determine teacher contributions to student performance. Classroom observations were used to determine the teaching practices of participant teachers, and student surveys assessed the extent to which the classroom environment was perceived to be engaging, supportive and ordered (MET, 2010). The MET investigators adopted a multiple-measurement approach because it provided greater predictive power, greater reliability, the potential for diagnostic insight enabling teachers to improve their practice, and the potential to capitalise on strengths and offset the weaknesses of single measures. The MET investigators combined data collected from all three measures into a single composite score and, based on this score, classified each teacher as effective or ineffective.
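By way of illustration only, the logic of combining standardised measures into a single composite can be sketched in Python. The teacher scores, the equal weighting, and the three-measure setup below are hypothetical; the MET project used its own data and weighting schemes.

```python
from statistics import mean, stdev

def zscores(values):
    """Standardise a list of scores to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical scores for four teachers on three measures
value_added = [0.10, 0.30, -0.20, 0.05]   # gain-based value-added estimates
observation = [3.1, 3.8, 2.5, 3.0]        # classroom observation ratings
survey      = [4.0, 4.5, 3.2, 3.9]        # student survey ratings

# Equal-weight composite: average of the standardised measures per teacher
composite = [mean(triple)
             for triple in zip(*map(zscores, [value_added, observation, survey]))]

# Rank teachers from most to least effective on the composite
ranked = sorted(range(len(composite)), key=lambda i: composite[i], reverse=True)
print(ranked)  # [1, 0, 3, 2]
```

Standardising each measure first prevents a measure with a larger scale (e.g., survey ratings) from dominating the composite.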

In the second phase (2010–2011), the MET investigators checked the accuracy of the combined measure for identifying effective teachers (MET, 2013). They designed a large-scale experiment to ascertain whether teachers identified as more effective than their peers in phase one contributed to greater student achievement gains when randomly assigned to classes of students. The MET investigators reported that students in classes with teachers identified as effective in phase one experienced greater learning gains than students in classes with teachers identified as less effective in phase one. The MET investigators concluded that combining the results from multiple measures of teacher effectiveness provided an accurate assessment of teacher effectiveness (MET, 2013).


Summary and main findings from the literature review

The consistent finding from teacher effectiveness research is that “classroom practices (or teachers) are the most significant determinant of student learning and achievement” (Muijs et al., 2014, p. 237). From the preceding review of the teacher effectiveness literature, it is evident that most investigations of teacher effectiveness have adopted an input-process-product approach, and there are findings from three streams of research which may be used to inform the development of a conceptual framework for this study.

The first stream of research focused on investigating the extent to which teacher characteristics (teacher qualifications, knowledge, and self-efficacy) contribute to student learning and achievement. Generally, mixed evidence has been reported.

The second stream of research focused on determining the extent to which specific teaching practices contribute to student learning and achievement. The findings reported in recent meta-analytical studies (e.g., Hattie, 2009; Kyriakides et al., 2013a) have supported findings (Brophy & Good, 1986; Muijs & Reynolds, 2000) which previously suggested the importance of opportunity to learn, structuring, feedback, questioning, practice and the classroom environment (Muijs et al., 2014) as characteristics of effective teaching practice.

The final stream of research focused on the best way to measure teacher effectiveness. This stream of research includes investigations which have focused on a variety of measures, such as principal evaluation, classroom observations, teacher self-report, student academic outcomes and value-added measures. Generally, researchers (MET, 2010) have recommended the use of multiple measures to determine teacher effectiveness.

Conceptual framework

The literature review of teacher effectiveness underlies the development of the conceptual framework shown in Figure 3.3.

[Figure 3.3 shows an input-process-product diagram. Inputs: teacher background characteristics (qualifications, experience) and teacher self-efficacy for six teaching behaviours (orientation, structuring, modelling, practice, questioning, classroom environment). Processes: the six teaching behaviours. Product: student mathematics achievement scores.]

Figure 3.3. Conceptual framework.

The conceptual framework in Figure 3.3 adopts the input-process-product framework used in previous investigations of teacher effectiveness. Inputs refer to teacher characteristics and include teacher qualifications, experience and self-efficacy for six teaching behaviours (Creemers & Kyriakides, 2008). Although evidence is mixed with regard to these teacher characteristics, the context of this study is Bangladesh, where it is possible these characteristics may determine the extent to which teachers use the teaching behaviours identified in the DMEE. The processes in Figure 3.3 are teaching behaviours and include six of the eight teaching practices (orientation, structuring, modelling, practice, questioning, and classroom environment) reported by Kyriakides & Creemers (2008) to be associated with effective teaching, student learning, and achievement. The products in the conceptual framework are student mathematics scores.

In summary, the conceptual framework (see Figure 3.3) proposed, first, that the relationship between teacher characteristics (i.e., academic qualifications, experience and self-efficacy for mathematics teaching) and student mathematics achievement scores is mediated by teaching behaviours (i.e., orientation, structuring, modelling, practice, questioning and classroom environment). Next, the conceptual framework proposed that teacher characteristics are directly related to teaching behaviours. Last, the framework proposed that teaching behaviours are directly related to student mathematics achievement scores, but this relationship was not investigated in the study.

Research questions

Four research questions emerged with respect to the relationships proposed in the conceptual framework, and were posed for investigation in the study.

1. What are the 20 highest performing secondary schools in mathematics within Dhaka Metropolitan City (DMC), Bangladesh?

2. What teaching behaviours are demonstrated by mathematics teachers in the 20 highest performing secondary schools in mathematics within DMC, Bangladesh?

3. Are teacher characteristics (i.e., experience, qualifications, and self-efficacy beliefs) related to teaching behaviours of mathematics teachers in the 20 highest performing secondary schools in mathematics within DMC, Bangladesh?

4. What level of skill in teaching behaviours is demonstrated by mathematics teachers in the 20 highest performing secondary schools in mathematics within DMC, Bangladesh?

Summary

This chapter has reviewed relevant aspects of the teacher effectiveness literature. A conceptual framework was developed based on the literature review to guide the study. A number of research questions emerged from the conceptual framework and were posed for investigation in the study. The next chapter will describe the research methodology, and discuss the rationale and approach adopted in the study.


Chapter 4 Research Methodology

The purpose of this chapter is to provide an overview and theoretical rationale for the two-phased mixed methodology approach adopted in this investigation. This chapter is organised into two parts. First, the theory and rationale for the overall research approach adopted in the investigation is presented, and second, the theory and rationale for the methods utilised in phase one and phase two of the study are specifically described and discussed.

Research approach

Research is the systematic process of data collection, analysis, and interpretation of findings to understand a phenomenon (Leedy & Ormrod, 2005) within a particular research paradigm or approach, and is disseminated to be examined by others (Gliner, Morgan & Leech, 2009). Generally, researchers will choose either a quantitative, qualitative, or a mixed method research approach in the conduct of research. The quantitative approach was defined by Creswell (2009) as “a means for testing objective theories by examining the relationship among variables. These variables, in turn, can be measured, typically on instruments, so that numbered data can be analysed using statistical procedures” (p. 4). The quantitative approach seeks to explain the phenomenon by collecting and converting data into a numerical form that is analysed using mathematically based methods (in particular statistics) (Aliaga & Gunderson, 2002). The qualitative approach “begins with assumptions, an interpretive/theoretical lens, and the study of research problems exploring the meaning individuals or groups ascribe to a social or human problem” (Creswell, 2014, pp. 64-65). Qualitative researchers collect data in the natural setting and typically analyse data inductively, identifying patterns and themes (Creswell, 2014).

The mixed method approach incorporates methods for collecting or analysing data from quantitative and qualitative approaches in a single study to investigate a research problem (Creswell, 2014; Johnson & Onwuegbuzie, 2004; Teddlie & Tashakkori, 2003). The mixed method approach is seen to be beneficial because it combines the strengths of the quantitative and qualitative approaches and minimises the weaknesses that may stem from using a single approach (Johnson & Onwuegbuzie, 2004; Leech & Onwuegbuzie, 2009). For example, the mixed method approach provides researchers with an opportunity to capture the details or nature of a situation, adding depth and context to quantitative results with a qualitative approach, and to view problems from multiple perspectives (Klassen, Creswell, Clark, Smith, & Meissner, 2012). Further, this approach allows researchers to investigate complex research problems which may not be easily understood using a single research approach (Klassen et al., 2012; Yin, 2009).

Creswell & Clark (2011) identified three mixed methods research approaches: sequential, concurrent and embedded. Sequential mixed methods use quantitative or qualitative methods separately, in several phases, without mixing, to investigate different research questions. The concurrent mixed method approach combines quantitative and qualitative methods concurrently to provide answers to the same research questions (Efron & Ravid, 2013). The embedded mixed method approach uses quantitative and qualitative methods in tandem, but one paradigm dominates the study. Thus, one method is nested within a larger method, which may be qualitative or quantitative (Efron & Ravid, 2013).

This study combined all three mixed methods approaches. Overall, the study adopted a sequential or two-phase approach (phase one and phase two) because it enabled investigation of separate research questions. In phase one quantitative methods were utilised, and in phase two a concurrent approach was used because research (Goe, Holdheide & Miller, 2014; MET, 2013) has previously shown multiple measures are necessary to provide accurate and substantive findings with regard to teacher effectiveness. Further, in phase two qualitative methods were embedded or nested within the dominant quantitative approach adopted in the study.

Phase one methodology

The purpose of phase one was to identify the top performing schools in mathematics within the Dhaka Metropolitan City (DMC), Bangladesh. In order to accomplish this purpose, a number of steps were undertaken, including adoption of a research design, selection of a population and sampling strategy, and determination of methods of data collection and data analysis procedures.

Research design.

A research design is the plan that guides an investigation and enables a researcher to provide answers to research problems (Kerlinger, 1986). There are two functions of a research design: the first is related to the procedures and tasks required to complete an investigation, and the second focuses on the quality of these procedures (Kumar, 2014). There are different types of research designs, and the type selected by a researcher will largely be determined by the purpose of the study and the research questions investigated.

Research methods.

Research methods refer to actions to investigate the research questions. They include identification of population and sampling strategy, data collection methods and procedures, and data analysis techniques.

Population and sample selection.

The population refers to all members of a group who are of interest in a particular study (Gravetter & Wallnau, 2013). When populations are very large, investigators commonly select a smaller group intended to be representative of the population, known as a sample (Gravetter & Wallnau, 2013). For example, in this study, the population of interest was the 5,867 secondary schools in the Dhaka region of Bangladesh. It was not possible to investigate every school in the population, and a smaller group of 380 secondary schools located in the DMC, Bangladesh was used in an attempt to represent the population of secondary schools in this region.

Broadly, there are three different sampling strategies: probability, non-probability and mixed sampling. Probability sampling involves random selection, ensuring each individual in a population has an equal and independent chance of selection. There are three types of probability sampling: simple, stratified and clustered. One of the main advantages of probability sampling strategies is that they can achieve a representative sample, but identifying and contacting all members of the sample can be difficult. Non-probability sampling strategies do not use random selection processes. They include convenience, accidental, expert, quota, snowball, and purposive sampling. In convenience sampling, the population members who are easiest to access or contact are selected to obtain information. Accidental sampling is based on the process of convenience sampling, and the researcher continues collecting respondents until reaching the required number. In expert sampling, all the respondents are ‘known experts’ in the field of study, and in quota sampling the researcher finds and interviews a prescribed number of people in each of several categories of interest. Snowball sampling uses a network process that starts by selecting a few respondents, who are asked by the researcher to nominate other respondents; this process continues until the required number of respondents is reached. In purposive sampling, the researcher uses her/his judgment to select members of a population who are likely to be able to provide accurate information. Purposive sampling is extremely helpful for describing a phenomenon about which little is known (Kumar, 2011). While non-probability sampling is less time consuming and more economical to use, it lacks objectivity in terms of sample selection. Mixed sampling strategies have the characteristics of both probability and non-probability sampling designs (Kumar, 2011).

A number of scholars (Miles & Huberman, 1994) have suggested that selection of sampling strategies should be determined by relevance to the conceptual framework and the research questions, ethical considerations, feasibility, richness of information about the phenomena of interest, provision for credible explanations and inference, and potential to enhance the generalizability of the findings to the population.

Adopting Miles and Huberman's (1994) suggestions, the secondary schools in the DMC in Bangladesh were purposively selected in phase one of this study because: (1) all the schools within the DMC were located in the capital city, and likely had similar characteristics with regard to school facilities (e.g., infrastructure, classroom furniture); (2) as inhabitants of the capital city, students and teachers in those schools would be less hesitant to provide information; and (3) the investigator was able to access the schools conveniently, which ensured efficient and timely data collection.

Data collection.

Data may be collected from primary or secondary sources (Kumar, 2011).

Primary data are collected firsthand, for the specific purpose of a study. This type of data may be collected through observation, interview, questionnaire or document analysis. Secondary source data are data collected for some other purpose, and not for the purpose of the study. There are different sources of secondary data, including government publications (e.g., school academic performance reports), previous research, personal records and mass media (Kumar, 2011).


Based on the purpose of the study and suggestions in the teacher effectiveness literature (e.g., MET, 2013), secondary source data were used in the first phase of this study. More specifically, student test score data were obtained from the Board of Intermediate and Secondary Education (BISE), Dhaka and analysed with value-added methods to determine the 20 highest performing secondary schools in mathematics in DMC, Bangladesh. While researchers (Kumar, 2011) have suggested investigators should be wary of the availability, format, validity and reliability of secondary source data, there were a number of reasons why this was not a concern with the secondary source data used in the first phase of this study.

Data analysis.

In phase one of this study, it was necessary to analyse a very large number of student test scores in mathematics (n = 380 schools) to determine the 20 highest performing secondary schools in mathematics in DMC, Bangladesh. A number of scholars (Drury & Doran, 2003; Hershberg, Simon & Lea-Kruger, 2004; McCaffrey et al., 2003) suggested using value-added methods to analyse student and school academic performance data to estimate the direct effects of teacher and school factors on student achievement. Goe (2008) argued value-added methods (VAM) provide a relatively objective and inexpensive technique to estimate the proportion of variability in student learning attributable to teachers and schools. Based on the literature review, value-added methods, and specifically the Education Value-Added Assessment System (EVAAS), were used to estimate direct effects of schools and teachers on student academic performance in mathematics in the DMC, Bangladesh. An explanation of the EVAAS model and estimation technique follows. According to Braun (2005) and Ballou et al. (2004), the basic model of EVAAS is the equation

y_t^k = b_t^k + u_t^k + e_t^k    …1(a)


which expresses the score of a student (y_t^k) at the end of a particular grade (k) in a particular year (t) as the sum of three components: the district mean test score in year t, grade k (b_t^k); the teacher effect, that is, the contribution of the grade k teacher to the year t test score (u_t^k); and systematic and unsystematic variation, or error, in year t, grade k (e_t^k). It is assumed that the teacher effect is the same for all the students in the class and attributable to the teacher of the class.

When the student moves in the next year (t+1) to the next grade (k+1), the equation is

y_{t+1}^{k+1} = b_{t+1}^{k+1} + u_t^k + u_{t+1}^{k+1} + e_{t+1}^{k+1}    …1(b)

and has four components: the district average for that grade and year (b_{t+1}^{k+1}), the teacher effect for that year (u_{t+1}^{k+1}), the teacher effect from the previous year (u_t^k), and the systematic and unsystematic error of that year (e_{t+1}^{k+1}).

It is assumed that the teacher effect for the previous year persists undiminished into the current year and that the components of the unspecified variations in the two years are unrelated to each other. All variables in equations 1(a) and 1(b) pertain to the same student and subject. Teacher effects are subscripted with years, and the model does not constrain teacher effectiveness to be constant over time. Thus, ‘teacher effects’ are actually teacher-within-year effects (Ballou et al., 2004). Finally, subtracting equation 1(a) from equation 1(b) and rearranging, we find the teacher effect after one year,

u_{t+1}^{k+1} = (y_{t+1}^{k+1} − y_t^k) − (b_{t+1}^{k+1} − b_t^k) − (e_{t+1}^{k+1} − e_t^k)

Braun (2005) suggested that, to make this formulation intuitively plausible and attractive, the error terms can be ignored. Thus, the teacher effect after one year (e.g., t+1) is

u_{t+1}^{k+1} = (y_{t+1}^{k+1} − y_t^k) − (b_{t+1}^{k+1} − b_t^k)

that is, the difference between the gain experienced by the student in that year and the average gain in the district for that same year.


Applying a similar method, and assuming that over two consecutive years the student is taught by the same teacher(s), with the gain in the student's scores and the district average measured between two test scores (e.g., the year t score y_t^k and the year t+2 score y_{t+2}^{k+2}), the teacher effect after two years is

u_{t+2}^{k+2} = (y_{t+2}^{k+2} − y_t^k) − (b_{t+2}^{k+2} − b_t^k)
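The simplified gain calculation above (error terms ignored, as Braun (2005) suggests) can be illustrated with a minimal Python sketch; the class scores and district gain below are hypothetical:

```python
# Hypothetical matched score gains for one class (same student cohort, years t to t+2)
student_gains = [12, 15, 9, 14, 10]   # y_{t+2} - y_t for each student
district_gain = 10.0                  # b_{t+2} - b_t, the average gain in the district

# Teacher (or school) effect: mean class gain relative to the district-average gain,
# i.e. u = (mean student gain) - (district gain), ignoring the error terms
class_mean_gain = sum(student_gains) / len(student_gains)
teacher_effect = class_mean_gain - district_gain
print(teacher_effect)  # 2.0 -> this class gained 2 points more than the district average
```

A positive value suggests the class gained more than the district average over the period; a negative value suggests it gained less.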

The EVAAS method of estimating direct effects of schools and teachers on student mathematics performance was used in the first phase of this study.

Phase two methodology

Within the context of the teaching effectiveness research, several studies (e.g., Ayres et al., 2001; 2004) selected teachers from identified top performing schools and reported their observed teaching practices as effective teaching, on the grounds that teaching exerts a considerable influence on the superior academic success of students in these schools. The study of Teddlie (1994) revealed that teachers in effective schools exhibited significantly better classroom teaching behaviour than teachers in non-effective schools. Likewise, the present study in its second phase attempted to explore the teaching practices of a sample of mathematics teachers drawn from the 20 highest performing secondary schools identified in phase one of the study. An important initial step towards accomplishing this purpose was the selection of an appropriate research design and research methods, which are discussed in the next section.

Research design.

Consistent with the overall mixed method approach adopted in this study, an embedded (qualitative methods embedded within the dominant quantitative methods), concurrent mixed method design (Efron & Ravid, 2013) was used in phase two of the study.


Quantitative research designs can be distinguished in several ways; two relevant to this study are the number of contacts with the study population and the nature of the investigation. Based on the number of contacts with the study population, quantitative research designs can be categorised as cross-sectional (one contact), before-and-after (two contacts), and longitudinal (more than two contacts). Cross-sectional designs aim to investigate a phenomenon at a single point in time, while before-and-after designs are used to determine change, and longitudinal designs enable a researcher to explore patterns of change (Kumar, 2011).

Based on the nature of an investigation, a quantitative research design may be classified as experimental, i.e., where variables are manipulated by a researcher, or non-experimental, i.e., where variables are not controlled by a researcher and causality may not be implied. There are three approaches in non-experimental research design: descriptive, associational, and comparative. The descriptive approach is used to investigate descriptive research questions and uses descriptive statistics such as means, percentages and frequency distributions to analyse data (Gliner et al., 2009). The associational approach is used to investigate the degree of association between variables of interest and may also be used to make predictions. A study using the associational approach may use associational inferential statistics such as correlation coefficients and regression analysis to determine relationship strength and direction (Gliner et al., 2009). The comparative approach may be used if an investigator wishes to compare existing attributes of participants within a group. Non-experimental studies using a comparative approach may use inferential statistics such as t-tests and ANOVA to analyse attribute differences in a group (Gliner et al., 2009).
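As a minimal illustration of the associational statistics mentioned above, a Pearson correlation coefficient can be computed directly from its definition; the paired teacher scores below are hypothetical:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # cross-products
    sxx = sum((a - mx) ** 2 for a in x)                   # sum of squares, x
    syy = sum((b - my) ** 2 for b in y)                   # sum of squares, y
    return sxy / sqrt(sxx * syy)

# Hypothetical paired measures: teacher self-efficacy and an observed-behaviour rating
efficacy  = [2.8, 3.4, 3.9, 3.1, 4.2, 3.6]
behaviour = [2.5, 3.2, 4.1, 3.0, 4.4, 3.5]

r = pearson_r(efficacy, behaviour)
print(round(r, 3))  # close to 1.0: a strong positive association
```

The sign of r gives the direction of the association and its magnitude the strength, which is the information an associational design seeks.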

In summary, a cross-sectional, associational research design that employed mixed methods was used to investigate the research questions in phase two of this study.


Research methods.

Research methods include actions to investigate research questions. They comprise the population and sampling strategy, methods of data collection and data analysis strategies.

Population and sample selection.

The theory underlying population and sampling strategy was discussed in pp. 76-78. In phase two of the study, a two-step purposive sampling strategy was used. In the first step, a sample of mathematics teachers was drawn from the top twenty schools identified in phase one of the study. In the second step, teachers were selected from this sample based on several criteria: (1) they agreed to participate, (2) they had been at the school since 2010, and (3) they had taught year 9 and/or year 10 from 2010-2012. This two-step purposive sampling strategy enabled student test scores in mathematics from the Senior Secondary Certificate (2013) to be matched with the teachers responsible for teaching the students who had attained those scores.

Data collection and analysis.

The second phase of the study relied on several primary sources of data, namely teachers and students, and multiple data collection methods, that is, observation, interviews, and questionnaires. Primary sources of data were used in phase two as this enabled the investigator to answer research questions directly based on up-to-date information, and the use of multiple data collection methods increased reliability (Mertler, 2012).

Observations.

Observation is an efficient way to see what people do and hear what they say (Robson, 2002). The information obtained from observation relates to what is currently occurring in a setting and provides direct “real-time” information on ongoing and unfolding behaviour, processes, situations or events. It is the most appropriate method of data collection in cases where the primary interest is individuals or groups, or when participants involved in interaction are unable to provide objective information (Kumar, 2011). Observations mainly rely on observational records of relatively natural behaviour. Distinctions are made between participant observation, i.e., when the observer participates in the activities of the group, and non-participant observation, i.e., when the observer does not participate in the group activities; disclosed (people know what the observer is doing) and undisclosed observations; structured (coding is used) and unstructured observations; and controlled (often in a laboratory) and natural observations (in a natural setting) (Coolican, 2009).

Conducting observations.

Conducting observations has several advantages. First, the researcher can gather data on actual behaviour, rather than asking participants to report behaviour. Second, it enables a researcher to see behaviours that participants may not be able to report themselves. However, conducting observations has some disadvantages, such as the Hawthorne effect, where participants behave differently because they know they are being observed, and a researcher may have to wait some time before observing the desired behaviour (Mertler, 2012). Following careful consideration of these advantages and disadvantages, observations were conducted in phase two of this study, as it was unlikely that teacher reports elicited through interviews or questionnaires would capture all aspects of the actual teaching behaviours in classrooms.

Recording observations.

Important decisions about the recording of observations will be determined by the purpose of a study. These decisions are related to the type of observation, the devices to use in recording, and the sampling techniques to be used for observations. Observations may be qualitative or quantitative, open or close ended, and structured or unstructured. Qualitative observations are usually open-ended and may be semi-structured (i.e., start with a particular issue of interest) or unstructured (i.e., not based on a given agenda). Qualitative observation techniques usually involve descriptive and narrative recording of events and behaviour (Coolican, 2009). In contrast, quantitative observations are close ended (i.e., use tally sheets, checklists, rating scales) and structured (i.e., categories of behaviour have been determined in advance).

Records of observations may be made using a number of devices, for example, visual recording, still camera, audio or handwritten notes, or ratings or coding on the spot (Coolican, 2009). The advantage of visual recording is that it allows an observer to view and analyse observations several times before coding, interpreting or drawing conclusions. In addition, it can be viewed by others, limiting observer bias and increasing the reliability of observations, interpretations, and conclusions (Kumar, 2011). Observations may be recorded using event sampling or time sampling techniques. Event sampling consists of the observer recording an event (or behaviour) every time it happens, and time sampling is when the observer decides on a time interval and then records the event occurring at that time (Coolican, 2009).
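The difference between event sampling and time sampling can be illustrated with a short Python sketch; the coded lesson timeline below is hypothetical:

```python
from collections import Counter

# Hypothetical coded timeline: one behaviour code per unit of time in a recorded lesson
timeline = ["lecture", "lecture", "question", "practice", "practice",
            "question", "lecture", "practice", "practice", "lecture"]

# Event sampling: tally every occurrence of each behaviour as it happens
event_counts = Counter(timeline)

# Time sampling: record only the behaviour observed at a fixed interval (every 3rd unit here)
interval = 3
time_sampled = timeline[::interval]

print(event_counts["practice"], time_sampled)
```

Note that time sampling trades completeness for economy: behaviours occurring between sampling points are not recorded.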

Validity and reliability of observations.

A significant challenge in conducting observations is the problem of observer bias. Coolican (2009) and Kumar (2011) recommended a three-step approach to overcoming observer bias and increasing the reliability of observations. First, use different observers; if there is good agreement between them, at least consistency may be assumed. Second, increase reliability by specifying in advance precisely what behaviour belongs to each category, and train observers to a high degree of reliability and accuracy to categorise behaviour accordingly. The last step requires the checking of inter-observer agreement using a statistical measure such as Cohen's kappa to determine the accuracy and reliability of observations. Cohen's kappa ranges from -1.0 to 1.0, where large values indicate high agreement and low values indicate less agreement than that which could be attributable to chance. Generally, kappa values between .4 and .6 are considered ‘fair’, .6 to .75 ‘good’, and above this ‘excellent’ (Elliott & Woodward, 2007).
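Cohen's kappa can be computed directly from its definition, as observed agreement corrected for chance agreement; the following Python sketch uses hypothetical behaviour codes from two observers:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Inter-observer agreement corrected for chance (Cohen's kappa)."""
    n = len(codes_a)
    # Proportion of units on which the two observers assigned the same code
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement expected from each observer's marginal code frequencies
    fa, fb = Counter(codes_a), Counter(codes_b)
    expected = sum(fa[c] * fb[c] for c in fa) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes (e.g., Q = questioning, S = structuring, M = modelling)
obs1 = ["Q", "S", "Q", "M", "S", "Q", "S", "M", "Q", "S"]
obs2 = ["Q", "S", "Q", "M", "Q", "Q", "S", "M", "Q", "M"]

print(round(cohens_kappa(obs1, obs2), 2))  # 0.7, 'good' by the thresholds above
```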

Applying recommendations from the literature to observations in this study.

As the study was based on a pre-existing framework (the DMEE), time sampling was used, and a digital video recording device enabled the researcher to capture accurate records of events and replay the recording as needed during the analysis, increasing the validity and reliability of interpretations and conclusions based on the analysis of observations.

Two pre-existing observation instruments were used to analyse the observations. The first observation instrument (LIO1) was based on Flanders' (1970) system of interaction analysis to measure the quantitative and qualitative characteristics of one of the teaching factors in the DMEE (the classroom as a learning environment). The second observation instrument (LIO2) was designed to measure the quantitative and qualitative characteristics of the other five teaching factors in the DMEE (orientation, structuring, modelling, questioning and application [practice]) associated with student academic achievement. The LIO1 and LIO2 have not been included due to copyright reasons.

The recommendations of several scholars (Coolican, 2009; Kumar, 2011) for minimising observer bias and maximising the reliability of observations were adopted in this study. First, the observations were recorded using a small digital video recorder. Second, the researcher trained another observer to code teaching behaviours using the coding instruments (LIO1 and LIO2) designed to measure the teaching behaviours in the DMEE (Creemers & Kyriakides, 2008, 2012). Third, the researcher checked inter-observer reliability with Cohen’s kappa statistic.
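To illustrate, Cohen’s kappa can be computed directly from two observers’ code sequences: it is the observed agreement corrected for the agreement expected by chance. A minimal sketch in Python follows; the observer codes below are hypothetical labels, not data from this study.

```python
def cohens_kappa(codes_a, codes_b):
    """Inter-observer agreement corrected for chance agreement."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    categories = set(codes_a) | set(codes_b)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement from each observer's marginal proportions
    p_e = sum((codes_a.count(c) / n) * (codes_b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Two observers coding the same ten classroom events (illustrative labels)
obs1 = ["Q", "Q", "S", "M", "Q", "S", "S", "M", "Q", "S"]
obs2 = ["Q", "Q", "S", "M", "S", "S", "S", "M", "Q", "Q"]
print(cohens_kappa(obs1, obs2))  # 0.6875: 'good' by the thresholds above
```

Here the two observers agree on 8 of 10 events (p_o = .8) but would agree on 36% of events by chance alone (p_e = .36), giving kappa = (.8 − .36)/(1 − .36) ≈ .69.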

Analysis of data generated from observations.

Recording of observations using coding instruments such as the LIO1 and LIO2 (Creemers & Kyriakides, 2008, 2012) generated data measured at the nominal level.

Nominal data consist of three or more unordered categories, and initial data processing involves assigning a number to each category to represent the category name, but there is no implied order or value (Morgan, Leech, Gloeckner & Barrett, 2012). Although limited statistical procedures are available to analyse nominal data, statisticians have developed some procedures to summarise, analyse and make inferences from this type of data, including the use of descriptive statistics, contingency tables and nonparametric statistics (Elliott & Woodward, 2007).

Preliminary analysis

Summarising data is an important preliminary step in data analysis, as it helps the investigator to understand her/his data, check for errors and ascertain whether the data meet the assumptions of the statistics that will be used in further analysis (Morgan et al., 2012).

The most common way to summarise nominal data is with descriptive statistics and graphs. Summaries of nominal data are typically given as counts (number of participants in each category) and/or as percentages (Elliott & Woodward, 2007).

Several different descriptive statistics may be used to summarise nominal data, such as frequency tables and bar charts, contingency tables, the mode, and the number of categories and counts in each category (Morgan et al., 2012). Frequency tables show counts (how many participants are in each category) and bar charts display this information graphically, often as percentages. Contingency tables are frequency tables that display two nominal variables at the same time, and may be used by an investigator to describe or determine relationships. The mode shows the most frequently occurring category and indicates a central tendency, while the number of categories and the frequency counts or percentages in each category may be used to show variability. Additionally, the maximum and minimum frequencies may provide information about the distribution of categories (Morgan et al., 2012).
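A minimal sketch of these summaries in Python, using hypothetical category codes rather than the study’s data:

```python
from collections import Counter

# Hypothetical behaviour codes from one observed lesson (illustrative only)
codes = ["questioning", "structuring", "questioning", "application",
         "questioning", "structuring", "orientation"]

freq = Counter(codes)                       # frequency table: count per category
mode = freq.most_common(1)[0][0]            # most frequently occurring category
percentages = {c: 100 * n / len(codes) for c, n in freq.items()}

print(freq["questioning"], mode)            # → 3 questioning
```

The frequency table and percentages summarise the distribution, and the mode gives the central tendency, as described above.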

Use of nonparametric statistical tests.

Nominal data are analysed with nonparametric statistical tests (which are less powerful than parametric tests) because the assumptions of parametric tests are violated (Morgan et al., 2012). A number of nonparametric tests may be used to investigate relationships between nominal variables; however, the following review is limited to those relevant to the purpose of this study: chi-square tests (to investigate relationships), phi or Cramer’s V (to determine the strength of a relationship), and Cohen’s kappa (to assess inter-observer reliability) (Morgan et al., 2012).

Chi-square contingency table analysis is commonly used to analyse the association between two nominal variables. For example, “one nominal variable may have r possible response categories and another may have c possible response categories. Thus, there are r x c possible combinations of responses for these two variables. The r x c contingency table has r rows and c columns consisting of r x c cells, which contain the observed counts for each of the r x c combinations” (Elliott & Woodward, 2007, p. 114). A chi-square (χ²) statistic is computed to compare observed counts with expected counts. If the value of the χ² statistic is greater than would be expected by chance, the model of no effects is rejected, the contingency table is significant, and interpretation of cell frequencies is warranted; observed and expected cell frequencies are then examined to explain findings. If the χ² statistic is not greater than would be expected by chance, the model of no effects cannot be rejected, the contingency table is not significant, and interpretation of cell counts is not warranted as values may have been obtained by chance.

Chi-square contingency table analysis is distinguished by two sampling strategies: the test of independence and the test of homogeneity. Tests of independence are used if there is interest in determining the association between two nominal variables (Elliott & Woodward, 2007). Tests of homogeneity are used if responses to a nominal variable have been collected separately from two or more populations (Elliott & Woodward, 2007).

While the sampling strategy does not affect how contingency tables are set up, it does affect the null hypothesis. For example, the null hypothesis for a test of independence would be H0: there is no association between the two variables, and for a test of homogeneity, H0: the distribution of the nominal variable is the same across the populations (Elliott & Woodward, 2007).

The use of chi-square tests is limited in several ways. First, observations must be unique to each cell; if a participant appears in more than one cell, assumptions are violated and findings may be invalid. Second, only frequencies may appear in cells: chi-square cannot be computed if cell contents are means, percentages, proportions or ratios. Third, chi-square depends on relatively large samples. A rule of thumb suggested by Cochran (1954) says that chi-square is adequate if no expected cell frequency is less than one, and no more than 20% fall below five. Camilli and Hopkins (1978) argued Cochran’s (1954) rule of thumb led to Type II errors and suggested chi-square tests were accurate, even if the expected frequencies in one or two cells were lower than five, as long as the sample size was more than 20. Even so, if cell frequencies are low, common practice is to use Fisher’s exact test (Coolican, 2009). Fisher’s exact test computes probability directly from frequencies. It provides similar information to the chi-square test about the statistical significance of relationships, but neither test provides information about the strength of the relationship (Morgan et al., 2012).
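The comparison of observed and expected counts described above can be sketched in plain Python. The 2 x 2 table below is illustrative only, not study data; with df = (r − 1)(c − 1) = 1, the computed statistic would be compared against the critical χ² value (3.841 at α = .05).

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the model of no association
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Illustrative table: a teaching behaviour (rows) by some grouping (columns)
table = [[30, 10],
         [20, 40]]
print(round(chi_square(table), 3))  # → 16.667, well above 3.841
```

Since 16.667 exceeds the critical value, the model of no effects would be rejected and inspection of the cell frequencies would be warranted.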

Phi and Cramer’s V coefficients provide information about the strength of a relationship between two nominal variables (Frey, 2016). The phi coefficient (φ) is used with variables that have only two categories, and Cramer’s V is used with variables that have more than two categories. Both phi and Cramer’s V are measures of relationship strength and are part of the ‘r’ family of effect sizes. Phi coefficients range from -1.00 to 1.00, with values further from zero indicating stronger relationships. Cramer’s V coefficients range from 0 to 1, with stronger relationships indicated by values closer to one (Frey, 2016). Phi and Cramer’s V may be difficult to interpret under some conditions, as the maximum value possible may be considerably less than one (Morgan et al., 2012).
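Cramer’s V is derived from the chi-square statistic: V = sqrt(χ² / (n(k − 1))), where n is the total count and k is the smaller of the number of rows and columns. A self-contained sketch with an illustrative table (not study data):

```python
import math

def cramers_v(table):
    """Cramer's V effect size for an r x c contingency table of counts."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    # Pearson chi-square computed from observed vs expected counts
    chi2 = sum((obs - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i, r in enumerate(table) for j, obs in enumerate(r))
    k = min(len(table), len(table[0]))  # smaller of r and c
    return math.sqrt(chi2 / (n * (k - 1)))

# For a 2 x 2 table, Cramer's V equals the absolute value of phi
table = [[30, 10],
         [20, 40]]
print(round(cramers_v(table), 3))  # → 0.408, a medium-to-large effect
```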

An important consideration in the recording of observations is inter-observer reliability, or the agreement between two observers on the coding of observations. Cohen’s kappa coefficient is used as a measure of inter-observer agreement. In practice, Cohen’s kappa coefficients range from 0 to 1; the closer kappa is to 1, the greater the agreement between observers (Frey, 2016). Most researchers prefer kappa coefficients of 0.6 to 0.7 or higher before declaring a good level of agreement (Elliott & Woodward, 2007).

Interviews.

Interviews are an interchange of views about a topic of mutual interest between two or more people (Kvale, 1996). Interviews are typically planned and include questions asked by an interviewer to elicit information from an interviewee about a phenomenon of interest (Cohen, Manion & Morrison, 2011).


While researchers (Bogden & Biklen, 1992; Lincoln & Guba, 1985; Oppenheim, 1992; Patton, 1980) use a number of different factors to distinguish between interviews, the most important is the degree of structure. Best conceptualised on a continuum, the degree of structure reflects the interview purpose (Cohen et al., 2011). Thus, structured interviews are useful when the investigator knows what is not known and is able to develop questions to elicit this information, whereas unstructured interviews are helpful when an investigator does not know and needs to hear it from the participant (Cohen, Manion & Morrison, 2007). Further, the more the investigator wants to generalise to the population, the more standardised and quantitative the interview becomes; the more the investigator wishes to acquire unique, personalised information, the more the interview is open-ended, unstandardised and qualitative (Cohen et al., 2007).

At one end of the continuum, the structured interview is a closed situation where content and procedures are set in advance. There is little scope for the interviewer to modify the previously determined content, sequence and wording of questions (Cohen et al., 2011). At the other end of the continuum, the unstructured interview is an open situation, with the research purpose shaping the questions asked, and content, sequencing and wording determined by the interviewer (Cohen et al., 2011). In this study, semi-structured interviews were used, enabling the researcher to modify content or probe more deeply and ensuring respondents provided information that addressed the research questions.

Planning and conducting interviews

Kvale (1996) suggested seven steps for planning and conducting interviews: thematizing, designing, interviewing, transcribing, analysing, verifying and reporting. Thematizing involves clarifying and translating the purpose into more specific, concrete research objectives which can be operationalised in interviews (Kvale, 1996). Designing involves operationalising research objectives into the interview questions that will comprise the interview schedule (Kvale, 1996). The design of the interview schedule is critical as it provides the interview structure, guidance for the interview process and, most importantly, the method of data collection that will determine the quality of a study (Creswell, 2014). Generally, interview schedules comprise open-ended or closed questions. Open-ended questions allow respondents to freely express opinions, attitudes or perceptions, providing in-depth information helpful when investigating complex issues (Cohen et al., 2011). The use of open-ended questions in interviews is advantageous in a number of ways: they are flexible; they enable the interviewer to probe for in-depth information, clarify questions, test interviewee knowledge and clear up misunderstandings; and they assist in building rapport and securing the cooperation of the interviewee (Cohen et al., 2011). In contrast, closed questions allow respondents to choose from a pre-specified set of alternatives. The main advantages of closed questions are that the information generated can be easily coded and that measurement equivalence increases reliability (Cohen et al., 2011). Disadvantages of closed questions are the generation of superficial data, and inaccuracy created if respondents are unable to find the most appropriate response among the alternatives (Coolican, 2009).

An important part of the design of an interview schedule is the construction, format, categorisation and sequencing of questions. Cohen et al. (2011) suggested questions be carefully considered in relation to the objectives of the interviews; the nature of the content; the specificity, depth and type of information sought; the characteristics of interviewees; and the type of interaction sought during the interview.

Researchers have proposed guidelines for questions. Arksey and Knight (1999) proposed using simple vocabulary; avoiding prejudicial language, ambiguity, double-barrelled questions and assumption-based questions; and making deliberate decisions about whether to use leading, hypothetical and sensitive questions. Tuckman (1972) noted interviewers may adopt four different question formats: direct, indirect, general and specific. Further, Tuckman (1972) suggested the direct and specific format of a question is likely to make interviewees more cautious, giving less honest answers, while the indirect and general format is likely to result in more open, honest and frank responses.

Distinctions between factual and opinion or attitude questions have informed guidelines (Patton, 1980) with respect to the focus of questions, such as demographic, descriptive, experience, behaviour, knowledge and contrast questions. Kvale (1996) added process questions, which introduce a topic, follow up a topic, probe for further information, ask for examples, directly or indirectly request information, and interpret interviewee responses.

In addition to interview questions, interview schedules commonly include prompts and probes. Prompts enable an interviewer to clarify questions or interviewee misunderstanding, while probes request more in-depth information (Cohen et al., 2011). Two types of probes are often used: one to gain more detailed information and one to obtain more elaborate responses (Aldridge & Levine, 2001). However, Fowler (2009) contended that too many prompts and probes introduce bias into the interview. In response to Fowler’s contention, Cohen et al. (2011) suggested sequencing the interview schedule so that each topic is discussed in turn, with specific questions for each topic, the issues within each topic, and questions for each issue, together with a series of prompts and probes for each topic, issue and question.

Interviewing procedures can be grouped into four stages: before, starting, during and ending interviews. Before the interview, an investigator should determine the purpose of the interview, the sample, the type of interview (that is, one-to-one, focus group or telephone), the setting, and how the interview will be recorded (Creswell, 2014).

Creswell (2014) suggested that at the start of interviews investigators should establish rapport with the interviewee, obtain informed consent, brief the interviewee on the purpose of the research, what to expect during the interview and its duration, and explain how responses will be recorded and analysed and how the results will be used. Cohen et al. (2011) labelled this part of interviewing explaining the ‘rules of the game’, so that interviewees are left in no doubt as to what will happen during and after the interview. They added several caveats to Creswell (2014), such as the interviewer introducing her/himself, setting the scene by indicating there are no right or wrong answers, noting that some topics may be deep but are not designed to test, inviting questions, and obtaining permission for recording.

During the interview, Cohen et al. (2011) advised interviewers to be mindful that interviews are not merely data collection exercises, but social encounters. Interviewers should establish an atmosphere attentive to the cognitive, ethical, interpersonal, interactional, communicative and emotional features of interviews so that interviewees feel able to discuss issues openly and honestly (Cohen et al., 2011). Thus, to address cognitive features, interviewers need sufficient content knowledge to maintain interviewee confidence or discern false information (Cohen et al., 2011), and to deal with ethical aspects, interviewers should consider issues of informed consent, confidentiality and the consequences of interviews (Cohen et al., 2011). Attention to interpersonal, interactional, communicative and emotional characteristics requires interviewers to be adept at active listening, able to establish and maintain rapport and keep a conversation going, motivate interviewees to share information, and develop strategies to overcome power asymmetries (Cohen et al., 2011). Further, scholars (Patton, 1980) have emphasised clarity of questioning and the sequencing and framing of interview questions, and others (Field & Morse, 1989) have noted several avoidable problems in the conduct of interviews, such as interruptions and distractions, asking awkward or sensitive questions, giving advice, and ending interviews too soon (Cohen et al., 2011).

Kvale (2007) suggested interviews may be ended by debriefing participants, giving them a chance to add information or ask questions and providing an opportunity for the interviewer to gain feedback about the participants’ interview experience. A further step may involve the interviewer summarising the main points learned to gain feedback from the participant. After this, an interviewer may close by indicating s/he has no further questions and ascertaining whether the participant has anything else to add, providing an opportunity to deal with any issues that may have concerned the participant during the interview.

Transcribing is an important part of interviewing because of the potential for data loss or distortion (Cohen et al., 2011). Transcriptions are problematic because they filter out the visual and non-verbal aspects of interviews, recording only verbal data (Cohen et al., 2011). Thus, transcriptions cannot be regarded as complete records of interviews. Cohen et al. (2011) suggested investigators should ensure that other important kinds of information are recorded during interviews, such as voice tone and inflection, pauses, silences and emphases, interruptions, and the mood and pace of talk.

Validity and reliability of interview data

Verifying interviews involves establishing validity and reliability with regard to both the interview itself and the interview data. Validity and reliability concerns largely stem from bias created by the characteristics of the interviewer (e.g., attitudes, expectations, assumptions and misperceptions), the interviewee (e.g., misunderstanding of what is being asked), and the content of questions. Clearly, a cluster of problems creates bias in interviews, which needs to be minimised as much as possible (Cohen et al., 2011). Kvale (2007) suggested a number of strategies that investigators could use to minimise bias in interviews, including: being knowledgeable about the subject of the interview, structuring the interview well, clearly defining terms, allowing participants time to answer, being sensitive and empathic, being aware of parts of the interview significant to participants, keeping to the point, checking the reliability, validity and consistency of responses with well-placed questions, being able to recall earlier comments, and being ready to clarify, confirm or modify participant comments.

Issues of validity and reliability extend to interview data. Three strategies recommended by scholars (Denzin, 1997; Kumar, 2011; Lincoln & Guba, 1985) to address these issues are member checking, audit trails, and triangulation.

Member checking involves having participants review the investigator’s written accounts and interpretations of interviews as a check of accuracy and completeness (Creswell, 2012); it addresses issues of dependability. Audit trails involve identifying acceptable processes for conducting a study so that results are consistent with data (Cohen et al., 2011); audit trails enable an investigator to address confirmability issues.

Triangulation involves using two or more methods of data collection to investigate particular phenomena in one study, and is used to demonstrate concurrent validity (Cohen et al., 2011). Triangulation is consistent with a mixed methods approach and has been argued to overcome the problem of ‘method boundedness’ (Gorad & Taylor, 2004). Denzin (1997) extended the idea of triangulation into six categories (time, space, combined, investigator, data and methodological), but in education, time (e.g., longitudinal and cross-sectional studies), space (e.g., different schools), investigator (e.g., two observers) and methodological (e.g., observations and interviews) triangulation are most commonly used (Cohen et al., 2011). Although criticised by a number of scholars (Denzin, 1997; Lincoln & Guba, 1985; Silverman, 1985), triangulation techniques were used in the second phase of this study because the complexity of the phenomena under investigation, that is, teaching practices, may not be fully captured if investigated from one viewpoint (Cohen et al., 2011).

Analysis of interview data

According to Miles, Huberman & Saldana (2014), qualitative data analysis involves three concurrent activities: data condensation, data display, and conclusion drawing and verification. Data condensation refers to the selection, processing and simplification of interview transcripts. A data display is an organised, compressed assembly of information which enables analysis and the drawing of conclusions (Miles et al., 2014). Conclusion drawing starts at the beginning of data collection as the investigator codes, notes patterns, and develops explanations and propositions; verification occurs as meaning begins to emerge from the data and involves testing emerging findings for plausibility, robustness and confirmability (Miles et al., 2014).

The first step in data condensation is to process handwritten field notes and audio or video recordings to enable an investigator to read, check the accuracy of, code and analyse the data (Miles et al., 2014). Interview data are usually transcribed into text, a form already condensed and simplified from the interview events (Miles et al., 2014).

The next step encompasses first and second cycle coding to derive themes, make assertions and develop propositions (Miles et al., 2014). First cycle coding involves assigning codes or labels to chunks of data. Although there are many different approaches to coding, the most commonly used are descriptive, In Vivo and process coding (Miles et al., 2014). Descriptive coding assigns a word or phrase to summarise the meaning of a chunk of data, In Vivo coding uses the words or phrases of participants as codes, and process coding uses ‘-ing’ words to indicate action in the data (Miles et al., 2014). In first cycle coding, codes may be developed deductively, based on the conceptual framework, research questions, hypotheses or key variables of interest, or inductively, as meaning emerges from the data (Miles et al., 2014). Initial codes may change as data analysis progresses; however, it is important that codes are conceptually and structurally coherent and relevant to the purpose of a study (Miles et al., 2014). Second cycle coding, or pattern coding, involves grouping the first cycle codes into a smaller number of categories or themes. Pattern codes are explanatory or inferential codes that identify emergent themes and are a type of meta-code (Miles et al., 2014). Pattern codes may comprise categories or themes, causes or explanations, relationships between people, or theoretical constructs (Miles et al., 2014). Pattern codes may be further analysed using narrative descriptions or matrix displays. Narrative descriptions are written elaborations of pattern codes supported by data collected from interviews. Matrix displays present information systematically in a visual format, such as a chart or table, for further analysis, enabling the investigator to draw and verify conclusions (Miles et al., 2014). Matrix displays may include different types and levels of data, for example, direct quotes, codes, categories, themes, summaries, researcher explanations or summarised judgements (Miles et al., 2014).

Miles et al. (2014) suggested displays are helpful if they contribute to reliable understanding. They recommended a number of specific tactics for drawing and verifying conclusions from data displays, beginning with a quick scan to see if anything stands out. First conclusions can then be drawn using multiple tactics such as noting patterns, identifying themes, comparing and contrasting, clustering and counting data (Miles et al., 2014). Further, Miles et al. (2014) proposed adding explanatory text to make conclusions explicit, and checking early conclusions against transcripts or written field notes to ensure accuracy. Additionally, conclusions need to be confirmed; the most commonly used tactics include following up surprises, triangulating, making if-then tests and checking out rival explanations (Miles et al., 2014). Conclusions about data displays may be summarised in analytical narratives, where display data are analysed and linked to existing theory and research, enabling conclusions to be drawn (Miles et al., 2014).

Applying recommendations from the literature to interviews in this study.

The guidelines for planning and conducting interviews and for the validity and reliability of interview data discussed in earlier sections were applied to the teacher interviews, including the development of the study’s interview schedule.

The interview schedule was designed to elicit in-depth information about the teaching behaviours noted in the DMEE. It comprised 19 open-ended questions on five teaching behaviours: orientation, structuring, modelling, questioning, and practice (Appendix B4). Before the interviews were conducted, the researcher piloted the schedule with typical respondents and experts to obtain feedback for modification and development.

The guidelines suggested by scholars (e.g., Cohen et al., 2011; Creswell, 2014; Kvale, 2007) were applied to the interview process. Each interview began with a brief explanation of the purpose, confidentiality and interview procedures, and formal consent to conduct and digitally record the interview, before proceeding to the interview questions (Appendix B4) developed for the study. Digital recording facilitated accurate transcription of the interviews afterwards. The qualitative data analysis techniques recommended by Miles et al. (2014) were used to analyse the interview data.


Questionnaires

Surveys are a common and efficient way to collect information from a relatively large number of people (Coolican, 2009). Questionnaires are often used to gather survey data. Questionnaires comprise questions or sets of statements that require a written response from research participants, and these responses are collected and collated by the investigator.

Planning and designing questionnaires.

Planning a questionnaire involves clarifying the research purpose and translating it into specific objectives that can be operationalised into measures of key concepts which will elicit appropriate information (Cohen et al., 2011). An important planning decision is to ascertain whether the research objectives can be achieved with an existing questionnaire or whether one needs to be developed for the study. Creswell (2002) advocated the use of existing questionnaires, where compatible with the research objectives, because their validity and reliability will already have been established.

Much of the earlier discussion on the design of interviews also pertains to the design of questionnaires, particularly the construction, format and categories of questions. These issues are therefore not discussed here; instead, the discussion focuses on two issues specific to questionnaires: the sequencing of questions and the layout of the questionnaire. Questionnaires often begin with simple, factual questions that are easy to answer (e.g., questions about age, sex or experience), then move to closed questions to obtain respondent attitudes, beliefs and perceptions, and finally shift to more open-ended questions to elicit opinions, attitudes, beliefs and perceptions and the reasons for them (Cohen et al., 2011). Sequencing of questions is important to maximise respondent cooperation (Oppenheim, 1992).

Additionally, Cohen et al. (2011) suggested easy, attractive and interesting questionnaires are more likely to elicit higher levels of respondent cooperation. Dillman, Carley-Baxter and Jackson (1999) reported most respondents expect to read a question, mark a response and move on to the next question, but many questionnaires are more complicated than this. Thus, clarity of wording and simplicity of design are critical in the design of questionnaires (Cohen et al., 2011).

Validity and reliability of questionnaires.

Validity and reliability are requirements of any research. Applied to questionnaires, validity refers to whether the questionnaire measures what it set out to measure (Kumar, 2011). Generally, the validity of a questionnaire may be established logically (by justifying each questionnaire item with reference to the research objectives) or by using statistical procedures (e.g., correlation coefficients) to support these links (Kumar, 2011). Three sources of evidence for the validity of questionnaires are relevant here: content, concurrent and construct validity. Content validity refers to the extent to which an item includes all aspects of the concept it purports to measure (Kumar, 2011). Concurrent validity refers to the extent to which the results of a questionnaire are consistent with previous measurements of the same construct/s (Kumar, 2011). Construct validity refers to the extent to which a questionnaire measures the construct/s it is supposed to measure (Kumar, 2011), and is usually determined by statistical procedures such as the Pearson correlation coefficient.

Applied to questionnaires, reliability refers to the “extent to which repeat measurements made under the same conditions will give the same result” (Moser & Kalton, 1989, p. 353). A questionnaire is said to be reliable if it is internally consistent (the tendency for people to score at the same strength on similar items) and stable (producing the same results when tested on different respondents). Internal consistency procedures include split-half techniques, in which items intended to measure the same variable are split into halves, the scores obtained from each half are correlated, and the stepped-up reliability is computed for the whole instrument, as well as the computation of Cronbach’s alpha (Coolican, 2009).
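Cronbach’s alpha can be sketched directly from its definition, alpha = k/(k − 1) × (1 − Σ item variances / variance of respondent totals), where k is the number of items. The ratings below are hypothetical, not drawn from the study’s questionnaires:

```python
from statistics import variance

def cronbach_alpha(items):
    """Internal consistency: `items` is a list of per-item score lists,
    with the same respondents in the same order for every item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    item_var = sum(variance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical ratings from five respondents on three questionnaire items
items = [[4, 5, 3, 5, 4],
         [4, 4, 3, 5, 4],
         [5, 5, 2, 5, 3]]
print(round(cronbach_alpha(items), 3))  # → 0.877
```

Alpha rises when items co-vary (respondents who score high on one item score high on the others), which is exactly the sense of internal consistency described above.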

Translating questionnaires.

Many questionnaires have been developed and written in English. When research is conducted in a non-English speaking context, it may be necessary to translate questionnaires into the native language of research participants. Several translation methods may be used, including direct translation, back-translation and mixed techniques (Usunier, 1998). In this study, back-translation was used as it is the most commonly adopted technique in research (Douglas & Craig, 2007). Back-translation involves two translators: a native speaker of the source language and a native speaker of the target language. One translator translates the source language (English) into the target language (e.g., Bangla), the other translates it back into the source language, and finally both versions are compared.

Piloting questionnaires.

The wording of questions is critical to the validity and reliability of a questionnaire and contributes to the quality of a study. Thus, piloting of questionnaires has been recommended by several researchers (Coolican, 2009; Kumar, 2011; Morrison, 1993; Oppenheim, 1992). Piloting should be undertaken with a small number of respondents and experts who can provide feedback on all aspects of the questionnaire (Coolican, 2009).

Applying recommendations from the literature to questionnaires in this study.

The guidelines for the development and design of questionnaires discussed previously were applied to the development of a teacher and student questionnaire in this study. The teacher questionnaire comprised two parts. The first part was designed to

108

collect information on teacher demographic characteristics. Although evidence is weak with regard to teacher characteristics (Creemers & Kyriakides, 2015), these questions were included because it was thought teacher characteristics may be important determinants of teaching behaviours within the Bangladesh context. Thus, teacher characteristics were measured with closed type questions such as, ‘what is your age?’ and ‘are you male or female?’ and open-ended questions such as, ‘how many years have you been teaching mathematics at secondary level?’ and ‘what is your highest degree or the level of school you completed?’ (see Appendix B1). The second part of the teacher questionnaire was designed to collect information on teacher self-efficacy related to eight teaching behaviours (i.e., orientation, structuring, modelling, practice, questioning and classroom environment) in the dynamic model of educational effectiveness

(DMEE) (Creemers & Kyriakides, 2008). This part of the questionnaire required teachers to rate ‘how confident’ they were on 25 statements related to the eight teaching behaviours. The 25 statements were either modified from the long form of the

“Teachers’ Sense of Efficacy Scale (TSES)” developed by Tschannen-Moran & Hoy

(2001) or developed to measure self-efficacy for the eight specific teaching behaviours of interest in the study. The statements from the TSES scale (Tschannen-Moran & Hoy,

2001) were modified and used because the validity and reliability of the scale had been confirmed in diverse settings (Knobloch & Whittington, 2002; Poulou, 2007;

Tschannen-Moran & Hoy, 2007; Wolters & Daugherty, 2007), and also confirmed in cross-national settings, for example, Cyprus, Canada, U.S.A., Korea, and Singapore

(Klassen et al., 2009). In addition, the content of some items in the TSES scale were consistent with some of the specific teaching behaviours of interest in this study. For example, item 3, ‘How much can you do to control disruptive behaviour in the classroom?’ and item 13, ‘How much can you do to get children to follow classroom


rules?’ were consistent with the teaching behaviour, classroom environment

(misbehaviour).

A total of 18 items were modified from the TSES scale and included in the second part of the teacher questionnaire (see Appendix B2). The modifications were made to make items relevant to Bangladesh, the context of the study. Five items from the TSES scale were excluded from the teacher questionnaire because they were not relevant to the Bangladesh context. Seven new items were developed to measure self-efficacy for five of the specific teaching behaviours of interest in this study (orientation, structuring, classroom environment [cooperation, competition, and group work], practice, and time management) (see Appendix B2).

Further, the stems of the 18 modified items from the TSES scale, that is, ‘how much’, ‘how well’ and ‘to what extent’ were replaced with ‘I am confident that I can’ in the teacher questionnaire used in this study. Additionally, the nine-point Likert rating scale was replaced with an 11-point rating scale, which ranged from 0% (no confidence) to 100% (complete confidence). This 11-point rating scale has been recommended by

Bandura (1997) for use in self-efficacy instruments.

The student questionnaire developed by Creemers & Kyriakides (2008, 2012) was designed to collect information from students about the eight specific teaching behaviours. Research evidence (Carle, 2009; Kyriakides, 2005) has suggested student ratings provide reliable information on teacher behaviour and the classroom environment. The validity and reliability of this questionnaire has been confirmed in a number of studies (e.g., Creemers & Kyriakides, 2008; Kyriakides et al., 2009;

Panayiotou et al., 2014). The questionnaire completed by students in this study comprised 48 of the 49 items from part A of the student questionnaire developed by

Creemers & Kyriakides (2008, 2012). The main modifications to this questionnaire


were the exclusion of one item (item 48) and part B, because they were not relevant to the

Bangladesh context. The five-point Likert rating scale, suggested by Creemers &

Kyriakides (2008, 2012), was used by students to rate the extent to which the teacher displayed certain teaching behaviours. The scale used ranged from ‘1’ (never) to ‘5’

(almost always).

Following the development of the teacher and student questionnaires, the investigator used the back-translation technique to translate the questionnaires into

Bangla, and pre-tested them to ensure lexical and idiomatic equivalence (Usunier,

1998). Additionally, further piloting of the questionnaires was undertaken with typical respondents and experts to obtain feedback.

Administering questionnaires.

Questionnaires can be administered in a number of ways, including: self-administration, post, one-to-one interview, telephone or internet (Cohen et al., 2011).

The discussion is restricted to self-administered questionnaires, which may be completed in the presence of the investigator or when the investigator is absent.

Self-administered questionnaires completed in the presence of the investigator contribute to both a high item and questionnaire completion rate, as queries can be immediately addressed. Further, it is an efficient way to collect data because questionnaires can be completed quickly by many respondents (e.g., students in a classroom) on one occasion (Cohen et al., 2011). At the same time, it may be time-consuming for the investigator, who may need to travel extensively, increasing the length of time for data collection. Further, the presence of the investigator may be threatening to some respondents, making them feel uncomfortable and pressured to complete the questionnaire at that time (Cohen et al., 2011).

Self-administered questionnaires completed in the absence of the investigator


allow respondents to take time to complete questionnaires in private and in familiar surroundings, thereby averting perceived pressure to participate caused by the investigator's presence (Cohen et al., 2011). Further, this approach is particularly appropriate if the population of interest is dispersed, as it is relatively inexpensive and is more anonymous than when the investigator is present (Phellas, Bloch & Seale, 2011).

At the same time, however, questions or queries cannot be addressed, increasing the possibility that questions may be wrongly interpreted or not completed, leading to inaccuracy in research findings (Phellas et al., 2011).

Analysis of questionnaire data.

Once questionnaires have been collected, the data generated must be processed and analysed. First, codes are developed and assigned to questionnaire items, and the coded data are entered into the computer. Second, exploratory data analysis is conducted, using a computer package designed to analyse data generated by questionnaires (such as SPSS), to examine the data and determine the appropriate statistical procedures for analysis.

Exploratory data analysis

Exploratory data analysis (EDA) is used to examine and explore the nature of data before the application of statistical procedures. Specifically, it involves checking data for outliers, non-normal distributions, coding issues, missing data and errors in data entry (Morgan et al., 2012). The two most commonly used methods in EDA are graphical displays of data (e.g., boxplots, histograms and stem and leaf plots) and descriptive statistics (e.g., minimum and maximum, mean, standard deviation and skewness) (Morgan et al., 2012).

The first step in EDA is to determine the nature of the data. A starting point for this is to determine the distribution through histograms and use scatterplots to examine


relationships between variables.

The next step is to assess the type and potential impact of missing data. Missing data is generally problematic because it may lead to biased results. Hair, Black, Babin,

Anderson & Tatham (2006) suggested a four-stage process for identifying and accommodating missing data. The first stage is to determine the type of missing data, the next is to examine the extent of missing data to determine if data should be excluded, then to diagnose the randomness of the missing data, and last to select an appropriate imputation method for estimating missing data (Hair et al., 2006).

The third step in EDA involves the identification of outliers (observations with unique characteristics distinctly different from other observations). While there are varied reasons for outliers, an important objective in EDA is to assess representativeness and to determine whether the observation should be included in or excluded from further analysis (Hair et al., 2006).

The fourth step is to check that the data meet the assumptions of the proposed statistical tests.

For parametric tests, the most important assumptions include normality, homoscedasticity, linearity and absence of correlated errors (Morgan et al., 2012). Hair et al. (2006) noted some parametric tests are robust to one or two assumption violations, and that it is more important for an investigator to understand the implications of assumptions and to balance the need to satisfy assumptions with robustness and the research context.
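The EDA checks described above (descriptive statistics, extent of missing data, and outlier identification) can be sketched in a few lines. The item scores below are invented purely for illustration, and pandas is used here as a stand-in for SPSS:

```python
import pandas as pd
import numpy as np

# Hypothetical item scores with one missing value and one outlier,
# used only to illustrate the EDA checks described in the text.
scores = pd.Series([3, 4, 4, 5, 3, 4, np.nan, 4, 5, 25.0])

# Step 1: nature of the data -- descriptive statistics.
desc = scores.describe()          # count, mean, std, min, max, quartiles
skewness = scores.skew()          # large positive skew flags non-normality

# Step 2: extent of missing data, as a percentage of cases.
pct_missing = scores.isna().mean() * 100

# Step 3: identify outliers, e.g. values more than 2.5 SD from the mean.
z = (scores - scores.mean()) / scores.std()
outliers = scores[z.abs() > 2.5]

print(desc)
print(pct_missing, skewness, list(outliers.index))
```

With these toy data, 10% of cases are missing and the value 25.0 is flagged as an outlier, which would prompt the representativeness judgement described above.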

The last step in EDA is to determine the measurement properties of the data, as this substantially affects what the data represent and the application of statistical procedures. Data may be classified as non-metric or metric based on the characteristics represented. Non-metric data show the presence or absence of a characteristic and may be measured on a nominal or ordinal scale. Nominal scales assign numbers to represent the presence or absence of a characteristic and are the lowest level of


measurement. Ordinal scales order or rank in regard to the amount or extent of a characteristic and are the next level of measurement (Hair et al., 2006).

Metric data reflect quantity or degree of a particular characteristic and are measured on interval or ratio scales. Interval and ratio scales have constant units of measurement so differences between points on the scale are equal. The main difference between interval and ratio scales is that interval scales use an arbitrary zero point and ratio scales use an absolute zero point (Hair et al., 2006).

Factor analysis

Factor analysis is a technique designed to analyse underlying patterns or relationships among large numbers of variables by defining sets of variables that are highly related, known as factors (Hair et al., 2006). The purpose of factor analysis is to represent the relationships among a group of indicators with a smaller set of latent factors.

There are two types of factor analysis, exploratory (EFA) and confirmatory (CFA) factor analysis. EFA is data driven (the factors are derived statistically) and CFA is theory driven (the factors are pre-specified a priori) (Hair, Black, Babin & Anderson,

2014). EFA is exploratory with no specifications made regarding the number of latent factors or patterns of relationships among indicators, whereas in CFA the investigator specifies in advance the number of factors and the pattern of relationships among the indicators based on strong empirical or conceptual a priori evidence (Brown, 2006).

A CFA was used in this study to analyse the questionnaires as there is substantial empirical support for the teaching factors in the DMEE (Creemers & Kyriakides, 2008).

CFA is used to provide a confirmatory test of a measurement theory (Hair et al.,

2014). The measurement theory specifies how indicator variables represent underlying constructs that cannot be measured directly (Hair et al., 2014). Measurement theory links constructs to variables (factor loadings) and constructs to each other (construct


correlations).

CFA involves a number of steps including: (1) defining individual constructs, (2) model specification, identification and estimation, (3) assessment of model fit, (4) examination of model fit indices, and (5) model re-specification and testing alternative models (Hair et al., 2014).

Initially, constructs to be included in the measurement model are defined, and carefully checked for construct and content validity, even if constructs are from pre-existing scales. Pre-testing is often undertaken to refine measures before CFA (Hair et al., 2014). The next step is to carefully consider how the constructs come together to form the measurement model. Existing theory and research are used to specify relationships between observed variables and latent variables in the measurement model

(Kline, 2005). Many different relationships may be proposed in a measurement model and different parameters estimated (Kline, 2005). According to Hair et al. (2014), model specification involves consideration of several important issues, including unidimensional measures, covariance between error terms, and how many measures per construct. Unidimensional measures are measures which are explained by one underlying construct. They are important when more than one construct is specified in a measurement model. In models that use a set of unidimensional measures and more than one underlying construct, each measure is hypothesised to relate to a single construct and any cross-loadings are hypothesised to be zero (Hair et al., 2014). When unidimensional measures are used, the existence of cross-loadings is indicative of poor construct validity. Thus, measured variables should be free to load on only one construct (Hair et al., 2014).

Another type of relationship is the covariance between the error terms of two measured variables. This relationship may occur as within-construct error covariance or between-construct error covariance (Hair et al., 2014). Significant between-construct error covariance is evidence of poor discriminant validity, and within-construct error covariance is a threat to construct validity (Hair et al., 2014). Thus, paths for within- and between-construct error covariance should be fixed at zero and not estimated (Hair et al., 2014).

Additionally, investigators must decide how many measures are necessary to represent a construct (Hair et al., 2014). According to Hair et al. (2014), good practice dictates a minimum of three measures per construct, and preferably four, to provide adequate statistical identification. Model identification is related to whether enough information exists to identify a solution to a set of structural equations (Hair et al.,

2014). Models can be distinguished by their degree of identification, which is defined by degrees of freedom after all the parameters that will be estimated have been specified

(Hair et al., 2014). An under-identified model has more parameters to be estimated than observed variable variances and covariances in the variance-covariance matrix. Thus, a solution cannot be found (Hair et al., 2014). A just-identified model has just enough degrees of freedom to estimate all free parameters, but these models do not test a theory as fit is due to circumstance (Hair et al., 2014). Over-identified models have more unique variance and covariance terms than parameters to be estimated, and a solution can be found with positive degrees of freedom and appropriate fit statistics (Hair et al.,

2014).
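The identification arithmetic described above can be expressed as a small helper: a model with p observed variables supplies p(p + 1)/2 unique elements in the variance-covariance matrix, and degrees of freedom are this number minus the free parameters. The function name and example counts below are illustrative, not from the thesis:

```python
def identification_status(n_observed: int, n_free_params: int) -> str:
    """Classify a CFA model by its degrees of freedom.

    Degrees of freedom = unique variances/covariances minus free
    parameters, where a p-variable model supplies p(p + 1)/2 unique
    elements in the variance-covariance matrix.
    """
    unique_elements = n_observed * (n_observed + 1) // 2
    df = unique_elements - n_free_params
    if df < 0:
        return "under-identified"   # no solution can be found
    if df == 0:
        return "just-identified"    # fits perfectly, tests no theory
    return "over-identified"        # positive df, fit can be assessed

# A one-factor model with 4 indicators: 4 loadings + 4 error variances
# (factor variance fixed to 1) = 8 free parameters, 10 unique elements.
print(identification_status(4, 8))  # over-identified (df = 2)
```

This also illustrates why Hair et al. (2014) prefer four indicators per construct: with three indicators a single-factor model is only just-identified.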

After model identification, the next step is to obtain estimates for parameters in the specified measurement model. The underlying goal is to find estimates which maximise the likelihood of parameters given the data (Brown, 2006). Finding parameter estimates is an iterative process, the computer begins with an initial set of estimates

(starting values) and repeatedly refines the estimates to minimise the difference between actual and estimated models (Brown, 2006). Convergence occurs when the program


arrives at a set of parameter estimates that cannot be improved (Brown, 2006).

A number of different estimation methods may be used. For example, maximum likelihood (ML), weighted least squares (WLS), diagonally weighted least squares

(DWLS), generalised least squares (GLS) and unweighted least squares (ULS). The most commonly used estimator is ML, but the choice of estimation method will be determined by characteristics of the sample, variables, and normality of the data

(Brown, 2006).

Model fit refers to how similar the pre-specified relationships are to the actual relationships in the data. Data are represented by an actual variance-covariance matrix of measured items and theory is represented by an estimated variance-covariance matrix. Model fit compares the estimated with the actual matrix (Hair et al., 2014).

More similarity between matrices indicates good model fit that is, the measurement theory provides a good account of the actual relationships in the data. Good model fit implies plausibility, but not correctness.

Guidelines for fit proposed by Hair et al. (2014) involve assessment of all aspects of construct validity using several different measures, such as path estimates, construct validity and model diagnostics. Path estimates are measures of the link between constructs and indicator variables. A rule of thumb is for path estimates to be at least .5 and preferably .7 or higher. Path estimates of this size indicate strong relationships and construct validity. Statistical significance of each path estimate should also be assessed

(Hair et al., 2014). Hair et al. (2014) suggested non-significant indicators should be dropped, and cautioned that estimates significant at stringent levels (p < .01) may seem impressive but may still load below .5 and should be considered for deletion. Additionally, Hair et al.

(2014) recommended examining path estimates to make sure loadings were reasonable.

For example, standardised loadings above 1.0 or below -1.0 are out of feasible range


and are indicative of a problem in the model.

In the process of conducting a CFA, model diagnostics are provided to address problems or improve model fit. Standardised residuals and modification indices provide diagnostic cues that are useful for identifying problems (Hair et al., 2014). Residuals are included in the standard output from most structural equation modelling (SEM) programs (e.g., LISREL 9.1, AMOS, Mplus). Residuals are individual differences between the actual covariance and estimated covariance. The smaller the residual, the better the fit of the model. Standardised residuals are the raw residuals divided by the standard error of the residual, and as they are not dependent on the measurement scale range, they are useful in diagnosing problems (Hair et al., 2014). Generally, standardised residuals less than |2.5| are not indicative of problems, but residuals greater than |4.0| (p < .001) suggest the indicator should be dropped. Hair et al. (2014) recommended standardised residuals between |2.5| and |4.0| be addressed, but the indicators may not need to be dropped if there are no other problems associated with them.
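The rules of thumb discussed above (loadings of at least .5 and preferably .7; standardised-residual thresholds of |2.5| and |4.0|) can be collected into a simple screening function. This is an illustrative sketch, not part of any SEM package:

```python
def flag_indicator(loading: float, std_residual: float) -> list:
    """Apply the Hair et al. (2014) rules of thumb for one indicator.

    Returns a list of diagnostic flags; an empty list means no
    problem was detected for this indicator.
    """
    flags = []
    if abs(loading) > 1.0:
        flags.append("loading out of feasible range")
    elif abs(loading) < 0.5:
        flags.append("loading below .5 -- consider deletion")
    elif abs(loading) < 0.7:
        flags.append("loading acceptable but below preferred .7")
    if abs(std_residual) > 4.0:
        flags.append("residual > |4.0| -- drop indicator")
    elif abs(std_residual) > 2.5:
        flags.append("residual between |2.5| and |4.0| -- investigate")
    return flags

print(flag_indicator(0.82, 1.1))   # no problems for this indicator
print(flag_indicator(0.41, 3.0))   # weak loading and a large residual
```

In practice these thresholds are screening heuristics: as the text notes, theory should guide whether a flagged indicator is actually deleted.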

A modification index is calculated for each relationship not estimated in a model

(Hair et al., 2014). Generally, modification indices of four or greater indicate model fit would be improved by freeing the corresponding path estimate (Hair et al., 2014).

However, Hair et al. (2014) cautioned against re-specifying models based on modification indices as it is inconsistent with the theoretical basis of CFA. They further suggested using modification indices as a guideline for improving model fit, if there was theoretical justification (Hair et al., 2014).

A number of different model fit indices provide different diagnostic information for examining model fit. Model fit indices can be classified into three types, absolute fit, fit adjusting for parsimony and comparative or incremental fit (Brown, 2006). Absolute


fit indices determine how well the measurement model fits the data (McDonald & Ho,

2002). Absolute fit indices include the chi-square statistic (χ2), which assesses the size of the discrepancy between the actual and estimated covariance matrices. Good model fit is indicated by a non-significant result (p > .05). However, χ2 is limited because it assumes multivariate normality (McIntosh, 2006) and is sensitive to sample size

(Jöreskog & Sörbom, 1993). Another absolute fit index is the standardised root mean square residual (SRMR), which is based on the differences between the residuals of the actual and estimated covariance matrices (Hooper, Coughlan & Mullen, 2008).

Values for the SRMR can range from ‘0’ to ‘1.0’, with good fit indicated by values less than .08 (Brown, 2006).

Although the root mean square error of approximation (RMSEA) is often grouped with absolute fit indices, it differs because it includes a penalty for poor model parsimony

(Brown, 2006). The RMSEA assesses the extent to which a model fits reasonably well in a population (in contrast to whether it fits exactly) (Brown, 2006). Values for

RMSEA range from ‘0’ to ‘1.0’ and lower values indicate better fit (Brown, 2006).

Comparative or incremental fit indices evaluate the fit of the estimated model in relation to a null or baseline model in which no relationships are specified (Brown,

2006). For example, the comparative fit index (CFI) compares the actual covariance matrix with a null model in which all latent variables are assumed to be uncorrelated, while taking sample size into account (Hooper et al., 2008). The values for the CFI range from ‘0’ to ‘1.0’, with higher values indicating better model fit (Brown, 2006).

Although there is much debate about goodness-of-fit indices, researchers have developed guidelines for interpreting fit indices (Brown, 2006). One of the most widely cited guidelines (Hu &

Bentler, 1999) suggested reasonably good model fit is obtained when: (1) SRMR values are close to or below .08, (2) RMSEA values are close to or below .06, and (3) CFI


values close to or above .95.
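As a rough illustration of how two of these indices are computed from model output, the standard formulas for the RMSEA and the CFI can be written directly. The chi-square values, degrees of freedom and sample size below are invented for illustration only:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation: penalises discrepancy
    per degree of freedom, so it rewards parsimonious models."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2_model: float, df_model: int,
        chi2_null: float, df_null: int) -> float:
    """Comparative fit index: fit of the estimated model relative to a
    null (baseline) model with no relationships specified."""
    d_model = max(chi2_model - df_model, 0)
    d_null = max(chi2_null - df_null, d_model)
    return 1 - d_model / d_null

# Hypothetical output: model chi2 = 85 on df = 60 with N = 400;
# null model chi2 = 900 on df = 78.
print(round(rmsea(85, 60, 400), 3))
print(round(cfi(85, 60, 900, 78), 3))
```

For these invented values the RMSEA falls below .06 and the CFI above .95, so the model would meet the Hu & Bentler (1999) guidelines quoted above.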

In terms of reporting, researchers have suggested various combinations of fit indices. For example, Hu & Bentler (1999) suggested a two-index format presentation, while Kline (2005) contended χ2, RMSEA, CFI, and SRMR should all be reported.

Based on the recommendations of Hu & Bentler (1999) and Kline (2005), Hooper et al.

(2008) proposed reporting the χ2, degrees of freedom and p-value, the RMSEA and confidence level, the SRMR and the CFI as the most sensible approach.

Model re-specification influences the underlying theory on which a model is based, and is not consistent with the theoretical basis of CFA (Hair et al., 2014). If changes are minor the theoretical integrity of the model may not be affected and research may proceed, but if major changes are made, measurement theory must be modified resulting in a new theory that will need to be tested with a different set of data

(Hair et al., 2014). Hair et al. (2014) proposed that if more than 20% of the indicators are deleted, the change cannot be considered minor and the model should be re-tested with a different data set. At the same time, Hair et al. (2014) acknowledged the most common re-specification would be the deletion of an indicator that does not perform well in a model. However, decisions about whether to drop an indicator are not straightforward, and theory should guide model re-specification (Hair et al., 2014).

Summary of research procedures in this study

The actual research process and procedures of the study are summarised in the flowchart depicted in Figure 4.1. As shown in the flowchart, the study had two phases, and a non-experimental sequential mixed-methods approach was adopted (see Figure 4.1). In the first phase, a cross-sectional cohort research design was used: a purposive sample was obtained from a secondary data source of BISE student test performance scores in

Dhaka, Bangladesh and the scores were analysed using a value-added method to answer


Figure 4.1. Flow chart of research processes in the study


the first research question (i.e., What are the 20 highest performing secondary schools in mathematics within Dhaka Metropolitan City (DMC), Bangladesh?).

The findings from phase one informed the second phase of the study (see Figure

4.1). In the second phase, the teacher and student samples were obtained from the 20 highest performing secondary schools in mathematics within DMC identified through value-added methods in phase one of the study, and a two-step purposive sampling strategy and multiple methods (i.e., observations, interviews, and questionnaires) were utilised to collect the data. The flow chart in Figure 4.1 shows the procedures adopted to collect data from each teacher and his/her students were: first, to observe and video-record the lesson; then to administer the questionnaire to students; and last, to conduct the interview with the teacher and distribute a questionnaire to be completed within one week. Figure

4.1 shows that a large amount of data was generated from the multiple sources and methods of data collection, enabling the researcher to triangulate the findings obtained through the data analysis procedures recommended by qualitative and quantitative experts (e.g., Cohen et al., 2014; Miles et al., 2014).

Summary

This chapter has provided details about the research methodology employed to investigate the research questions posed in this study. The next chapter reports and discusses the results of the study.


Chapter 5 Results and Discussion

The purpose of this chapter is to report and discuss the results from phase one and phase two of the investigation. Specifically, this chapter describes the population, sampling, data collection and analysis procedures, reports the results from both phases of the study, and discusses the results in the context of the relevant research question.

Phase one results

Initially, permission was obtained from university authorities (Appendix A3), and the Chairman of the Board of Intermediate and Secondary Education (BISE), Dhaka to conduct the first phase of the study (Appendix A4).

Population and sample selection.

The data obtained through the BISE, Dhaka comprised a database of student mathematics grades in the Junior Secondary Certificate (JSC, 2010) and the Secondary

School Certificate (SSC, 2013). Specifically, the BISE, Dhaka database included the mathematics grades for all students who completed the JSC (2010) and SSC (2013) and attended the 5,867 secondary schools (BANBEIS, 2012) administered by the BISE,

Dhaka. As the student population in these databases was very large, a purposive sample comprising the mathematics grades of all students who attended the 380 schools within the Dhaka Metropolitan City (DMC) was extracted from the BISE database for the JSC (2010) and SSC (2013).

To extract the sample, the data were transferred to a Microsoft Excel (2010) spreadsheet, students were grouped by the secondary school attended, and the schools were grouped according to the geographical location within the BISE, Dhaka. This enabled the researcher to identify the sample of secondary schools located within the

DMC for further analysis. Thus, the sample in phase one comprised the mathematics grades for students in the JSC (2010) and SSC (2013) who attended the 380 high


schools in the DMC.

Validity and reliability of JSC and SSC data.

There are a number of reasons why the BISE, Dhaka database of student mathematics grades for the JSC (2010) and SSC (2013) used in the analysis provides a source of valid and reliable data. First, the BISE, Dhaka is responsible for and administers the JSC and SSC public examinations. The BISE uses rigorous procedures in the administration and analysis of student papers. For example, the BISE selects a number of schools to act as examination centres and directs students to attend an examination centre which is not the school attended. All students are given a registration number. Invigilators include teachers from the school nominated as an examination centre, and external BISE supervisors monitor examination activity at each centre. Once the examination is over, identifying student information is separated from answer sheets (which include student registration numbers) and sent to the Dhaka BISE following strict administrative procedures and processes. Student papers are marked and graded. Marks ranging from 0-100 are converted into grades, ranging from F to A⁺ (see

Table 5.1). Individual subject grades are averaged, and each student receives a grade point average (GPA) which ranges from 0 to 5 and the result is published officially.

Preparation of JSC and SSC data.

Once the sample of schools (and student grades) was identified, a number of steps were required to prepare the data for the value-added analysis. The first step required manually working through the data (51,541 students) to match student cohort data for the JSC (2010) with the SSC (2013). To facilitate matching of student cohort data, the BISE student registration number for external examinations was used. This registration number is used by students when they sit the JSC and SSC examinations.

The matching process yielded a total of 51,434 students in the cohort for JSC (2010)


and the SSC (2013) who attended the 380 schools in the DMC. Thus, there were 107 unmatched students who did not complete the SSC (2013).
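The matching logic can be sketched as follows. The registration numbers and grades below are invented for illustration; the actual matching was performed manually in Microsoft Excel over the full cohort:

```python
# Registration number -> grade, for each examination cohort.
# All values are invented purely to illustrate the matching step.
jsc_2010 = {"R001": "A", "R002": "B", "R003": "A+"}
ssc_2013 = {"R001": "A+", "R003": "A", "R004": "B"}

# A student is matched if the same registration number appears in
# both the JSC (2010) and SSC (2013) cohorts.
matched = {reg: (jsc_2010[reg], ssc_2013[reg])
           for reg in jsc_2010 if reg in ssc_2013}

# Unmatched JSC students did not complete the SSC (2013).
unmatched = [reg for reg in jsc_2010 if reg not in ssc_2013]

print(matched)
print(unmatched)
```

Here one invented student (R002) sat the JSC but not the SSC, mirroring the 107 unmatched students found in the real data.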

The next step required the conversion of student grades in mathematics in the JSC

(2010) and SSC (2013) to coded scores. The seven grades A+, A, A-, B, C, D, and F were coded as 5, 4, 3.5, 3, 2, 1, and 0 respectively, as shown in Table 5.1.

Table 5.1

Individual subject grade, score range, and grade point average

Grade   Range of scores   GPA
A+      80-100            5
A       70-79             4
A-      60-69             3.5
B       50-59             3
C       40-49             2
D       33-39             1
F       00-32             0
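The conversion in Table 5.1 can be expressed as a small lookup, sketched here for illustration (the actual coding was performed in Microsoft Excel):

```python
def mark_to_grade(mark: int) -> tuple:
    """Convert a raw mark (0-100) to a letter grade and coded score,
    following the bands in Table 5.1."""
    bands = [(80, "A+", 5), (70, "A", 4), (60, "A-", 3.5),
             (50, "B", 3), (40, "C", 2), (33, "D", 1), (0, "F", 0)]
    for cutoff, grade, gpa in bands:
        if mark >= cutoff:
            return grade, gpa
    raise ValueError("mark must be between 0 and 100")

print(mark_to_grade(85))  # ('A+', 5)
print(mark_to_grade(33))  # ('D', 1)
print(mark_to_grade(20))  # ('F', 0)
```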

Value-added analysis.

Value-added analysis was conducted to identify the twenty highest performing secondary schools in the DMC, Bangladesh. Value-added procedures consistent with the

Education Value-Added Assessment System (EVAAS) approach were adopted. The value-added analysis was conducted using Microsoft Excel (2010) and the steps are outlined below.

The first step was to calculate school means of student mathematics scores in the

JSC (2010) and SSC (2013) for each of the 380 schools. The results of this step for the

20 highest performing secondary schools are displayed in columns (2) and (3) in Table

5.2. The second step was to obtain the mean gain in mathematics scores for each school over the two years [i.e., JSC (2010) to SSC (2013)]: the mean of student mathematics scores in each school in the JSC (2010), shown in Table 5.2 as (y⁸₂₀₁₀), was subtracted from the mean of student mathematics scores in each school in the SSC (2013), shown in Table 5.2 as (y¹⁰₂₀₁₃). It should be noted, in Bangladesh, students passing the JSC examinations continue with a further two years of schooling and sit the SSC examinations at the beginning of the following year (see pp. 9-12 for more details).

Thus, the equation for this calculation was:

(y¹⁰₂₀₁₃ − y⁸₂₀₁₀)

The results of this step are shown in Table 5.2 (column 4).

In the third step, the mean of student mathematics scores in the Dhaka

Metropolitan City (DMC) in the JSC (2010) and SSC (2013) were calculated. As before, the mean gain in mathematics scores was calculated by subtracting the mean in the JSC

(2010) from the SSC (2013) in the DMC. This is represented by the equation:

(b¹⁰₂₀₁₃ − b⁸₂₀₁₀)

The fourth step was to calculate the value-added score for each school by subtracting the DMC gain from the gain of each school. This calculation is represented by the equation:

u¹⁰₂₀₁₃ = (y¹⁰₂₀₁₃ − y⁸₂₀₁₀) − (b¹⁰₂₀₁₃ − b⁸₂₀₁₀)

In this equation, u¹⁰₂₀₁₃ is the value-added score. The value-added scores for the

380 secondary schools are displayed in Table C1 in Appendix C. Table C1 shows that value-added scores for the 380 schools ranged from 1.91 to -2.58, and descriptive analysis revealed the mean value-added score was .24, with a standard deviation of .56.

Closer examination of Table C1 showed 209 schools were value-adding above the mean

(i.e., M > 0.24), and 171 schools were value-adding below the mean (i.e., M < 0.24).

Figure C1 (see Appendix C) shows the frequency distribution of value added scores for the 380 secondary schools. It can be seen from Figure C1 that the distribution of scores is negatively skewed and leptokurtic, and there are several outliers (e.g., school ranked 380).
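The four value-added steps described above can be sketched end-to-end. The per-student coded scores below are invented for two hypothetical schools; the actual analysis used Microsoft Excel over all 380 schools:

```python
# Invented per-student coded mathematics scores (see Table 5.1)
# for two hypothetical schools.
schools = {
    "School A": {"jsc": [4, 3.5, 5, 4], "ssc": [5, 5, 5, 4]},
    "School B": {"jsc": [3, 2, 3, 2],   "ssc": [3, 3, 3.5, 2]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Steps 1-2: school means, then school gain (SSC mean minus JSC mean).
gains = {name: mean(d["ssc"]) - mean(d["jsc"]) for name, d in schools.items()}

# Step 3: the DMC-wide gain, pooled across all students.
all_jsc = [s for d in schools.values() for s in d["jsc"]]
all_ssc = [s for d in schools.values() for s in d["ssc"]]
dmc_gain = mean(all_ssc) - mean(all_jsc)

# Step 4: value-added score = school gain minus DMC gain; then rank
# schools from highest to lowest value-added score.
value_added = {name: g - dmc_gain for name, g in gains.items()}
ranked = sorted(value_added, key=value_added.get, reverse=True)
print(value_added, ranked)
```

With these toy figures, School A gains more than the pooled DMC gain and so receives a positive value-added score, while School B receives a negative one, illustrating how schools above and below the DMC mean are separated.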


The last step in the value-added analysis was to rank each school according to value-added scores from highest to lowest, identify the 20 highest performing schools (assigning pseudonyms to maintain confidentiality), and display the JSC and SSC data, school gains and value-added scores in Table 5.2 for further analysis in phase two of the study.

Table 5.2

Twenty highest performing secondary schools in mathematics within DMC based on value-added scores (JSC, 2010 - SSC, 2013).

School name   SSC 2013   JSC 2010   School gain   DMC gain   Value-added score   School rank
(1)           (2)        (3)        (4)           (5)        (6)                 (7)
Seam          4.61       1.87       2.74          0.83       1.91                1
Tasin         3.63       1.20       2.43          0.83       1.60                2
Nakib         4.43       2.02       2.41          0.83       1.58                3
Chadni        4.88       2.50       2.38          0.83       1.55                4
Abida         4.07       1.79       2.28          0.83       1.45                5
Hamida        3.90       1.76       2.14          0.83       1.31                6
Kanta         4.42       2.30       2.12          0.83       1.29                7
Drishti       3.43       1.33       2.10          0.83       1.27                8
Khan          4.09       2.05       2.05          0.83       1.22                9
Sumon         3.53       1.52       2.01          0.83       1.18                10
Tinni         3.31       1.34       1.97          0.83       1.14                11
Mahbub        3.94       1.98       1.96          0.83       1.13                12
Refat         3.80       1.85       1.96          0.83       1.13                13
Megh          3.98       2.03       1.95          0.83       1.12                14
Aruna         4.64       2.69       1.95          0.83       1.12                15
Protik        4.23       2.28       1.95          0.83       1.12                16
Angela        4.03       2.11       1.92          0.83       1.09                17
Raihan        4.56       2.64       1.91          0.83       1.08                18
Nipa          3.88       1.97       1.91          0.83       1.08                19
Shahid        3.67       1.77       1.90          0.83       1.07                20

Note. SSC 2013 = school mean mathematics performance in the SSC 2013 (y¹⁰₂₀₁₃); JSC 2010 = school mean mathematics performance in the JSC 2010 (y⁸₂₀₁₀); School gain = (y¹⁰₂₀₁₃ − y⁸₂₀₁₀); DMC gain = (b¹⁰₂₀₁₃ − b⁸₂₀₁₀); Value-added score = (u¹⁰₂₀₁₃).


It can be seen from Table 5.2 that the value-added scores of the twenty highest performing schools ranged from 1.07 to 1.91. Descriptive analysis showed a mean value-added score of 1.27 and a standard deviation of .23. Comparison with the value-added scores of each school (see Table 5.2) indicated eight of the 20 highest performing schools were value-adding above the mean (i.e., M ≥ 1.27) and 12 were value-adding below this level.

Table 5.3 displays the minimum and maximum value-added scores, means and standard deviations for the 380 schools and the 20 highest performing secondary schools.

Table 5.3 suggests a wide gap in the mean value-added score (a difference of 1.03) between the mean of the 380 schools (M = .24) and the mean of the 20 highest performing schools (M = 1.27).

Table 5.3

Descriptive statistics of the 380 secondary schools and the 20 highest performing secondary schools in mathematics within DMC

Schools              Minimum   Maximum   Mean value-added score   Standard deviation
380 schools          -2.58     1.91      .24                      .56
20 highest schools   1.07      1.91      1.27                     .23

Characteristics of twenty highest performing secondary schools.

Table 5.4 summarises the demographic characteristics of the twenty highest performing secondary schools in mathematics within the DMC. It should be noted all top performing schools were private schools. Table 5.4 shows most of these schools catered for grades one to ten (n = 13), three catered for grades six to ten (n = 3), and four catered for grades one to twelve (n = 4). No patterns are evident in terms of the size of the schools. As can be seen in Table 5.4, four schools have 1000 or more students, nine schools have 500 or fewer, and seven schools have more than 500 but fewer than 1000 students. Table 5.4 also shows most secondary schools have five or fewer mathematics teachers (n = 17) and three have more than five. While Table 5.4 suggests congruence between the size of the student population and the number of mathematics teachers in schools, the exception is Kanta secondary school, which had a student population of more than 1000 and two mathematics teachers.

Table 5.4

Characteristics of twenty highest performing secondary schools in mathematics within DMC

School    School            Secondary students (number)    Secondary teachers (number)
name      grades            Male    Female    Total        Mathematics    Total
Seam      1-10              500     0         500          4              18
Tasin     1-10              400     350       750          9              27
Nakib     1-12              795     275       1070         8              42
Chadni    6-10              100     150       250          2              12
Abida     1-10              0       550       550          3              15
Hamida    1-10              0       780       780          3              18
Kanta     1-10              453     706       1159         2              27
Drishti   1-10              68      12        80           1              10
Khan      1-12              550     450       1000         5              22
Sumon     1-10              166     157       323          5              13
Tinni     6-10              220     300       520          3              12
Mahbub    1-10              234     249       483          5              16
Refat     1-10              285     215       500          2              24
Megh      1-10              0       250       250          2              14
Aruna     1-12              0       350       350          3              11
Protik    1-10              289     360       649          1              12
Angela    1-10              0       1268      1268         8              60
Raihan    6-10              292     360       652          4              18
Nipa      1-12              514     359       873          4              42
Shahid    1-10              0       195       195          2              11

Notes. School type: 1-10 refers to combined primary and secondary grades, 6-10 refers to secondary grades only, 6-12 refers to combined secondary and higher secondary grades, and 1-12 refers to combined primary, secondary and higher secondary grades.


Phase two results

Prior to commencing phase two, emails were sent to authors requesting permission to use their instruments. Once approval was given (Appendix A1), permission to conduct the study was obtained from university authorities (Appendix A5) and from the Director General of the Directorate of Secondary and Higher Education (DSHE), Dhaka, Bangladesh (Appendix A7).

Population and sample selection.

A non-probability sampling strategy was adopted to select a sample of schools from the twenty high performing secondary schools (see Table 5.4) identified in the value-added analysis in phase one. Recruitment emails (Appendix A8) were sent to the principal of each of the top 20 performing secondary schools requesting the school's participation in the study. Additionally, the email requested the names and contact details of mathematics teachers in the school who had been teaching year 9 and year 10 mathematics prior to, or since, 2010 and who would be willing to participate in the second phase of the study. All 20 principals agreed to their school participating in the study. However, only 15 of these schools had mathematics teachers who met the length of service and year 9/10 teaching criteria stipulated in the recruitment email (Appendix A8). Further contact was made with these 15 schools to confirm their participation. Once a school had agreed to participate, a recruitment email (Appendix A9) was sent to the mathematics teacher(s) who matched the length of service and year 9/10 teaching criteria. Fifteen mathematics teachers from eight schools agreed to participate in the study. This represented a teacher response rate of 94% and a school response rate of 53%. The final sample comprised 15 mathematics teachers (who met the length of service and year 9/10 teaching criteria) from eight of the top 20 performing secondary schools identified in phase one.


Preliminary data analysis.

To examine non-observable teaching characteristics (e.g., experience, qualifications, self-efficacy beliefs), a questionnaire translated into Bengali (see Appendix B3) was distributed to teachers before the teaching observation. All teachers returned completed questionnaires (100% response rate) within ten days. Completed questionnaires were coded and checked, and a data file was created in SPSS 22.0 and checked for missing values and coding errors before analysis.

Demographic characteristics of teachers.

Table 5.5 displays the demographic characteristics (i.e., age, gender, experience, qualifications) of the teachers who participated in the study. It should be noted that pseudonyms have been used for teachers to maintain confidentiality.

Table 5.5

Teacher demographic characteristics (n=15)

School   School    Teacher   Sex   Age   Experience   Qualifications
rank     name      name            (years) (years)
1        Seam      Shapla    M     37    14           MPhil (Education)/MSc
2        Tasin     Moni      M     45    20           BSc
2        Tasin     Nila      M     43    22           MSc
2        Tasin     Adnan     M     45    21           MSc
3        Nakib     Saddam    M     50    26           MA (Islamic history)/BSc
3        Nakib     Angel     M     47    21           BSc (Hons)
3        Nakib     Momota    F     54    25           MSc (Chemistry)
4        Chadni    Bela      F     36    10           MSc (Chemistry)
6        Hamida    Hazzaz    M     46    21           MSc
6        Hamida    Antu      M     39    15           MSc (Physics)
7        Kanta     Bindu     M     50    22           BSc
12       Mahbub    Babu      M     42    9            BSc (Hons)
15       Aruna     Priya     M     44    19           MSc
15       Aruna     Kazol     M     50    27           BSc
15       Aruna     Shilpo    M     50    22           BSc

Notes. M refers to males, F refers to females, BSc refers to Bachelor of Science, MSc refers to Master of Science, MA refers to Master of Arts, MPhil refers to Master of Philosophy, Hons refers to Honours.


The frequency distributions of teacher demographic characteristics are summarised in Table D1.1 in Appendix D1. Examination of Table D1.1 shows most teachers were male (87%), with about two-thirds aged 36-49 years (67%) and one-third aged 50-54 years (33%). The teachers had a range of experience (9-27 years), and most (53%) had between 16 and 22 years of teaching experience. In addition, Table D1.1 (see Appendix D1) indicates 60% of teachers held Master's qualifications, 27% held undergraduate qualifications, and 13% had attained honours at this level. It should be noted that all teachers must also complete the one-year Bachelor of Education degree. In summary, the results in Table D1.1 suggest teachers are both experienced and highly qualified practitioners.

Self-efficacy beliefs of teachers.

The raw data for teacher self-efficacy beliefs related to teaching behaviours (i.e., orientation, structuring, modelling, practice, questioning and classroom learning environment [teacher-student interaction; competition, cooperation, group work; and misbehaviour]) are shown in Table D1.2 (see Appendix D1). The descriptive statistics for teacher self-efficacy beliefs related to teaching behaviours are displayed in Table 5.6.

Table 5.6

Descriptive statistics for self-efficacy beliefs (n = 15)

                      Level of self-efficacy
Teaching behaviour    Minimum (%)   Maximum (%)   Mean (%)   SD
Orientation           80            100           96         7.4
Structuring           80            100           93         7.0
Modelling             67            100           88         9.4
Practice              70            100           94         9.1
Questioning           40            93            74         19.1
CE (I)                55            95            77         13.0
CE (CCG)              80            100           90         6.1
CE (MB)               67            97            89         9.4

Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour).
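The Table 5.6 descriptives can be sketched as below. The fifteen per-teacher questioning percentages used here are hypothetical stand-ins for the raw values in Table D1.2 (Appendix D1), so the computed statistics illustrate the method rather than reproduce the table exactly.

```python
# Illustrative computation of min/max/mean/SD for one behaviour's
# self-efficacy ratings (hypothetical data, n = 15 teachers).
import statistics

questioning = [40, 55, 60, 65, 70, 70, 75, 75, 80, 80, 85, 85, 90, 90, 93]

print("min:", min(questioning), "max:", max(questioning))
print("mean:", round(statistics.mean(questioning), 1))
print("SD:", round(statistics.stdev(questioning), 1))  # sample SD, as SPSS reports
```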


Generally, the results in Table 5.6 suggest higher levels of self-efficacy for orientation, structuring, modelling, practice, CE (CCG) and CE (MB) than for questioning and CE (I), which additionally have noticeably larger standard deviations than the other teaching behaviours. It can be seen from Table 5.6 that, in comparison to other teaching behaviours, teachers may be less confident with questioning and CE (I) behaviours.

Teaching observations.

This section comprises two parts: the first part reports the procedures and results for the quantitative characteristics of teaching behaviours, and the second part reports the procedures and results for the qualitative characteristics.

Data collection procedures.

Prior to conducting teacher observations, the researcher provided each teacher with information on the background and purpose of the study, the observation procedures, and the confidentiality and anonymity provisions, including written consent forms for teachers (Appendix A10) and parent/guardian consent forms for students (Appendix A11), and arranged with the teacher a convenient time to conduct the observations. Completed consent forms (teachers and students) were collected by the researcher before observations were conducted. The teaching observation procedures required the researcher to video record each teacher teaching one lesson (lesson length ranged from 28-53 minutes) to a group of year 9 students. A digital video recorder was used to record teacher behaviours for accuracy in data collection. Also of note is that, while all teachers were observed teaching new content from the year 9-10 national mathematics curriculum, the topics covered differed. Appendix D2 shows the specific topics taught by teachers during the observations.


Data analysis procedures.

The preliminary analysis involved two steps. In the first step, the researcher viewed the video recording of each lesson (several times) and adapted the codes (where necessary) of the observation protocols (low-inference observation instrument one [LIO1] and low-inference observation instrument two [LIO2]) (Creemers & Kyriakides, 2012). Codes were assigned for the five dimensions (frequency, stage, focus, quality and differentiation) of observed teaching behaviours and recorded on an Excel spreadsheet, using the two observation protocols (LIO1 and LIO2). The LIO1 and LIO2 were designed to generate data on the five dimensions of the six effective teaching behaviours (orientation, structuring, modelling, practice, questioning, and classroom environment [interaction] and [behaviour]) distinguished in the dynamic model of educational effectiveness (Creemers & Kyriakides, 2012). Classroom environment (behaviour) was further sub-divided into positive and less positive elements of student behaviour. Specifically, classroom environment (CCG) included encouraging positive competition, cooperation, and group work among students, and classroom environment (MB) included dealing with misbehaviour.

The guidelines developed by Creemers and Kyriakides (2012) for using the LIO1 and LIO2 were adopted by the researcher. In addition, the researcher trained a peer (who had agreed to view the recorded teaching observations) to assign codes using the LIO1 and LIO2 observation protocols. The researcher's peer viewed recordings of seven teachers, separately assigning codes using the LIO1 and LIO2. The consistency of researcher and peer ratings using the LIO1 and LIO2 was determined using the kappa statistic. The inter-rater reliability was found to be kappa = 0.72 (p < .001). Although not overly high, the kappa statistic exceeds .7 and represents a good level of agreement (Elliott & Woodward, 2007). The coding sequences and descriptors used for the qualitative characteristics (stage, focus, quality and differentiation dimensions) of teaching behaviours are shown in Appendix D3.
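The inter-rater agreement check can be sketched with a minimal implementation of Cohen's kappa, which compares observed agreement between the two raters against the agreement expected by chance. The two code sequences below are hypothetical; the thesis reports kappa = .72 for the actual researcher-peer ratings.

```python
# Minimal Cohen's kappa for two raters' categorical codes (hypothetical data).
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["orient", "model", "model", "practice", "question", "model", "orient", "practice"]
b = ["orient", "model", "model", "practice", "model", "model", "orient", "orient"]
print(round(cohen_kappa(a, b), 2))
```

Kappa corrects raw percentage agreement for chance, which is why it is preferred over simple agreement for categorical observation codes.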

In the second step, the codes for teaching behaviours were entered into an Excel spreadsheet, which contained the corresponding, previously coded school and teacher demographic data. As the length of each teacher observation ranged from 28 to 53 minutes (due to differences in lesson times between the schools), the raw coded data for the frequency dimension were standardised to 40 minutes, using a Microsoft Excel spreadsheet, to facilitate further data analysis. The coded teaching observation data were checked for errors and imported into SPSS 22.0 for further analysis.
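The standardisation step above amounts to scaling each raw count by the ratio of the standard lesson length (40 minutes) to the observed lesson length. The counts and lesson lengths below are hypothetical, but the lengths reflect the reported 28-53 minute range.

```python
# Sketch of standardising raw frequency counts to a common 40-minute lesson,
# so teachers observed for different lesson lengths are comparable.
def standardise_to_40(count, lesson_minutes):
    return count * 40 / lesson_minutes

# The same raw count of 9 tasks is credited differently depending on
# whether it was observed in a 53-minute or a 28-minute lesson.
print(round(standardise_to_40(9, 53), 1))
print(round(standardise_to_40(9, 28), 1))
```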

Coding of the five dimensions (frequency, focus, stage, quality, differentiation) of the teaching behaviours (orientation, structuring, modelling, practice, questioning, classroom environment [interaction], [CCG], [MB]), generated a large volume of categorical data. Thus, descriptive and non-parametric data analysis strategies were employed, and these were performed using SPSS 22.0.

Determining the quantitative characteristics of teaching behaviours.

A key assumption of the dynamic model (Creemers & Kyriakides, 2008) is that the effectiveness of teaching behaviours can be defined and measured in terms of five dimensions: frequency, focus, stage, quality, and differentiation. Frequency is used quantitatively to determine the effectiveness of teaching behaviours (Creemers & Kyriakides, 2012). Frequency was determined by noting the number of tasks associated with a teaching behaviour and recording the time (in minutes) each task took in the lesson (Creemers & Kyriakides, 2012). For questioning, wait time was recorded separately and combined with the duration of other questioning tasks (Creemers & Kyriakides, 2012).
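The frequency measure described above can be sketched as follows: the count of tasks gives the frequency, and for questioning, the separately recorded wait times are added to the task durations. All times below (in minutes) are hypothetical illustrations, not observed values.

```python
# Sketch of the frequency dimension for questioning: task count plus
# total duration, with separately recorded wait time folded in.
question_tasks = [0.4, 0.6, 0.5, 0.3]  # minutes per questioning task (ask + feedback)
wait_times = [0.1, 0.2, 0.1, 0.2]      # wait time recorded separately per question

frequency_count = len(question_tasks)
total_duration = sum(question_tasks) + sum(wait_times)
print(frequency_count, round(total_duration, 1))
```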


Table 5.7 and Table 5.8 display the raw data on the number and duration (in minutes) of tasks for each teaching behaviour. It should be noted that for structuring, frequency was calculated excluding the frequency of 'transition' (i.e., moving from easy to difficult content), as this was considered to be an automatic part of structuring.

Table 5.7

Number of teaching tasks (counts)

School   Teacher   Orient   Structure   Model   Practice   Question   CE (I)   CE (CCG)   CE (MB)
1        Shapla    2        2           2       1          0          4        1          0
2        Moni      9        2           8       0          0          4        0          0
2        Nila      3        7           11      1          17         8        2          0
2        Adnan     3        4           5       2          0          3        2          0
3        Saddam    9        1           4       3          7          5        0          0
3        Angel     2        1           3       1          0          3        0          0
3        Momota    3        4           1       1          7          4        0          1
4        Bela      1        1           4       4          0          2        0          0
6        Hazzaz    3        3           4       1          6          4        0          0
6        Antu      3        1           3       2          4          2        1          0
7        Bindu     1        1           5       4          1          4        1          0
12       Babu      3        2           4       1          0          3        2          0
15       Priya     3        1           4       1          6          5        7          0
15       Kazol     1        0           3       3          5          6        0          0
15       Shilpo    1        3           2       1          3          3        2          0

Preliminary analysis.

Several patterns evident in Table 5.7 and Table 5.8 are worth noting. First, most teachers (i.e., ≥ 14) used tasks associated with five of the teaching behaviours (orientation, structuring, modelling, practice, and classroom environment [interaction]) distinguished in the dynamic model (Creemers & Kyriakides, 2008).

Second, very few teachers engaged in tasks associated with CE (misbehaviour) (see Table 5.7). A possible explanation for the absence of tasks associated with CE (misbehaviour) is that in Bangladesh the teacher's authority is accepted, and it is culturally unacceptable for a student to engage in classroom misbehaviour. This finding is consistent with other research (Edwards, Malik & Haase, 2010; Khanum, 2014), which has reported that in parts of Bangladesh teachers have absolute authority in the classroom, students show great respect to the teacher, and students do not consider it appropriate to challenge a teacher in public.

Table 5.8

Duration of teaching tasks (in minutes)

School   Teacher   Orient   Structure   Model   Practice   Question   CE (I)   CE (CCG)   CE (MB)
1        Shapla    2.0      0.5         25.0    6.0        0.0        3.8      0.1        0
2        Moni      14.0     4.0         19.0    0.0        0.0        4.8      0.0        0
2        Nila      1.0      5.5         25.0    1.0        5.0        5.1      0.3        0
2        Adnan     1.5      8.0         18.0    12.0       0.0        3.8      2.0        0
3        Saddam    14.0     0.5         10.0    3.0        2.0        6.4      0.0        0
3        Angel     2.0      1.0         15.0    11.0       0.0        4.9      0.0        0
3        Momota    7.0      2.0         10.0    10.0       4.0        4.6      0.0        7.2
4        Bela      1.5      1.0         11.0    22.0       0.0        4.6      0.0        0
6        Hazzaz    2.0      1.0         23.0    7.0        2.0        3.8      0.0        0
6        Antu      2.0      1.0         21.0    10.0       1.0        4.7      0.5        0
7        Bindu     3.0      0.3         22.0    9.0        0.2        4.9      0.2        0
12       Babu      4.5      1.0         14.0    14.0       0.0        4.7      0.3        0
15       Priya     6.0      0.6         23.0    4.0        4.0        4.9      2.0        0
15       Kazol     7.0      0.0         15.0    12.0       1.0        5.2      0.0        0
15       Shilpo    0.4      2.0         23.0    6.0        1.0        4.2      0.4        0

Note. Orient refers to orientation, CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour).

Third, Table 5.7 suggests there were differences between teachers in terms of the number of tasks associated with teaching behaviours. For example, Nila used 17 tasks associated with questioning, while Shapla used none. Last, some tasks, and therefore teaching behaviours, were observed more frequently than others. In particular, teachers were observed to engage in modelling more frequently (Table 5.7), and for a longer part of the lesson (Table 5.8), than any other teaching behaviour. The exception appears to be Bela, who used more practice activities (see Table 5.7), and these took up longer periods of the lesson (see Table 5.8).

Descriptive statistics.

Descriptive statistics were computed in SPSS 22.0 for the number and duration of teaching tasks. Table 5.9 displays the results and shows several interesting patterns in teaching behaviours.

Table 5.9

Descriptive statistics for number and duration of teaching tasks

                      Task number                  Task duration (minutes)
Teaching behaviour    Min   Max   Mean   SD        Min   Max   Mean   SD
Orientation           1     9     3.1    2.5       0.4   14    4.5    4.4
Structuring           0     7     2.2    1.8       0     8     1.9    2.2
Modelling             1     11    4.2    2.5       10    25    18.3   5.4
Practice              0     4     1.7    1.2       0     22    8.5    5.6
Questioning           0     17    3.7    4.6       0     5     1.4    1.7
CE (I)                2     8     4.0    1.6       3.8   6.4   4.7    0.7
CE (CCG)              0     7     1.2    1.8       0     2     0.4    0.7
CE (MB)               0     1     .07    .3        0     7.2   .5     1.9

Notes. Min refers to minimum, Max refers to maximum, SD refers to standard deviation, CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour).

First, teachers were engaged in the full range of teaching behaviours identified in the dynamic model (Creemers & Kyriakides, 2008). However, a comparison of means in Table 5.9 indicates teachers engaged in some teaching behaviours more often, and for longer periods of time, than others. For example, teachers most frequently used modelling behaviour (task number, M = 4.2, SD = 2.5), which took an average of 18 minutes of a 40-minute lesson (M = 18.3, SD = 5.4). In contrast, teachers infrequently had to deal with misbehaviour (task number, M = .07, SD = .3; task duration, M = .5, SD = 1.9), set very few practice activities (M = 1.7, SD = 1.2), and rarely encouraged students to engage in positive competition, cooperation or group work (M = 1.2, SD = 1.8). Second, Table 5.7 shows wide variation in the number of questioning tasks (M = 3.7, SD = 4.6) among the 15 teachers. However, close examination of Table 5.9 suggests this may be explained by one teacher, Nila, using more questions (17) than the other teachers (see Table 5.7); if Nila is removed from the analysis, the variation is greatly reduced (M = 2.8, SD = 2.9).

Frequency distributions.

The frequency distributions for the number and duration of teaching tasks are displayed in Tables D5.1 and D5.2 (see Appendix D5). The results for each teaching behaviour are reported below.

Orientation.

Most teachers (87%) provided up to and including four orientation activities, which took a maximum of eight minutes of total lesson time (Table D5.1 and Table D5.2). In contrast, a few teachers (13%) provided nine orientation activities, which took 14 minutes of total lesson time.

Structuring.

Table D5.1 and Table D5.2 show most teachers (60%) engaged in one to two structuring activities, which took up to two minutes of the lesson. In contrast, one teacher (7%) was observed to engage in seven structuring activities, which took between 6-8 minutes of lesson time.

Modelling.

All teachers engaged in modelling behaviours (i.e., explaining or presenting new content and/or showing students how to solve problems), and most teachers (87%) were observed to engage in modelling behaviour up to six times for 10-20 minutes of the lesson (see Tables D5.1 and D5.2).


Practice.

Table D5.1 shows most teachers (84%) provided between zero and two practice activities, but Table D5.2 indicates 74% of practice activities took between 6 and 15 minutes of the lesson. Thus, while practice activities were few in number, a substantial part of the lesson provided opportunities for students to engage in practice. Further, Table D5.2 shows that, with the exception of modelling, most lesson time was taken up with practice activities. The wide range of practice activity durations (i.e., 0-23 minutes) is largely attributable to one teacher (7%) whose practice activities accounted for a large part of the lesson.

Questioning.

Table D5.1 shows 40% of teachers did not use questioning, while 53% used between one and seven questions, and one teacher (7%) used seventeen questions. Even though most teachers (60%) did use questioning, Table D5.2 suggests it took minimal lesson time.

Classroom environment (Teacher-student interaction).

Table D5.1 indicates that up to eight teacher-student interactions were observed in all classrooms, and Table D5.2 shows teacher-student interactions ranged from four to eight minutes in terms of lesson time.

Classroom environment (Competition, cooperation, group work).

Table D5.1 shows eight teachers (53%) encouraged positive competition, cooperation, and group work among students. However, Table D5.2 indicates teachers were observed to engage in these behaviours for less than three minutes of the lesson. Further, Table D5.2 reveals encouraging positive competition, cooperation, and group work among students was the second least observed teaching behaviour.


Classroom environment (Misbehaviour).

Table D5.1 shows the majority of teachers (93%) were not observed to engage in teaching behaviours associated with student misbehaviour, and, not surprisingly, Table D5.2 indicates most teachers (93%) did not spend any lesson time dealing with it.

Extent to which teaching behaviours are used in lessons.

To determine the extent to which teachers used teaching tasks related to each teaching behaviour, the mean task number for the group (n = 15) (see Table 5.9) was compared with the number of tasks used by each teacher (Table 5.7). The results displayed in Figure 5.1 show the percentage of teachers who used a number of teaching tasks equal to, or above, the mean task number of the 15 teachers observed.

[Figure 5.1 is a bar chart of the percentage of teachers at or above the mean task number for each teaching behaviour: orientation 60%, modelling 60%, CE (I) 60%, structuring 53%, CE (CCG) 53%, questioning 47%, practice 40%, CE (MB) 7%.]

Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour)

Figure 5.1. Percentage of teachers displaying the mean or above teaching behaviours.

Figure 5.1 suggests more than half of the teachers (60%) displayed the mean (or higher) number of tasks related to orientation, modelling, and classroom environment (interaction) behaviours; just over half engaged in structuring and in encouraging competition, cooperation, and group work (53%); less than half displayed questioning (47%) and practice (40%) behaviours; and very few were observed dealing with student misbehaviour (7%).

Further, the mean duration of teaching behaviours (Table 5.8) was calculated as a percentage of the available teaching time. The results displayed in Figure 5.2 indicate modelling (46%), practice (21%), teacher-student interactions (12%), orientation (11%), structuring (5%), and questioning (3%) accounted for most of the available teaching time. Teachers allocated the least amount of teaching time to encouraging competition, cooperation and group work (1%) and dealing with misbehaviour (1%).

[Figure 5.2 is a pie chart of teaching tasks as a percentage of teaching time: Modelling 46%, Practice 21%, CE (I) 12%, Orientation 11%, Structuring 5%, Questioning 3%, CE (CCG) 1%, CE (MB) 1%.]

Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour)

Figure 5.2. Teaching tasks as a percentage of teaching time.
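The Figure 5.2 percentages can be derived from the mean task durations in Table 5.9 as shares of the standardised 40-minute lesson. The sketch below uses the Table 5.9 means; note the unrounded shares sum to slightly over 100% because the means themselves are rounded.

```python
# Each behaviour's mean task duration (minutes, from Table 5.9) as a
# percentage of the 40-minute standardised lesson.
mean_duration = {
    "Orientation": 4.5, "Structuring": 1.9, "Modelling": 18.3,
    "Practice": 8.5, "Questioning": 1.4, "CE (I)": 4.7,
    "CE (CCG)": 0.4, "CE (MB)": 0.5,
}

share = {k: 100 * v / 40 for k, v in mean_duration.items()}
for behaviour, pct in share.items():
    print(f"{behaviour}: {pct:.2f}% of a 40-minute lesson")
```

Modelling's share (45.75%, reported as 46% in Figure 5.2) dominates, consistent with the pattern described above.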

Contingency table analysis.

Chi-square statistics and contingency tables were used to analyse the associations between several of the categorical variables. Initial steps for all analyses showed the data violated one of the conditions for chi-square and contingency table analysis (i.e., an expected count of at least five in each cell); therefore, cells were combined as recommended by experts in the literature (e.g., Spatz, 2011). A number of steps were required to combine the cells for task frequency. First, the number of tasks (Table 5.7) was compared to the mean number of tasks associated with each teaching behaviour (Table 5.9). Next, codes were used to categorise teachers into two categories: a code of 'one' was given to teachers whose number of tasks for a teaching behaviour was below or equal to the mean, and a code of 'two' was assigned to teachers whose number of tasks was above the mean. Last, the teachers coded 'one' were combined to form one cell and the teachers coded 'two' were combined to form the second cell.
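The cell-combination (dichotomising) step above can be sketched as follows, using the questioning task counts from Table 5.7. Note this sketch splits at the exact mean, whereas the thesis's Table 5.11 note reports the cut as "≤ 4 questions", so the resulting cell counts differ slightly.

```python
# Recode each teacher's task count as 1 (at or below the mean) or
# 2 (above the mean) so a 2 x 2 table has adequate expected counts.
# Counts are the questioning task numbers from Table 5.7.
questioning_counts = [0, 0, 17, 0, 7, 0, 7, 0, 6, 4, 1, 0, 6, 5, 3]
mean = sum(questioning_counts) / len(questioning_counts)  # ~3.7, as in Table 5.9

codes = [1 if c <= mean else 2 for c in questioning_counts]
print(codes.count(1), codes.count(2))
```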

Similar steps were applied to the task duration data (i.e., teachers were assigned codes of one and two, where code one referred, teachers displaying teaching behaviour,

‘below or equal to mean task duration’ (Table 5.9), and a code of two (2) referred to

‘above mean task duration’ (Table 5.9).

A Fisher’s exact test (two-sided) was performed to determine the association between the variables. In addition, Phi values were computed to determine the strength of association for 2 x 2 crosstabs, and Cramer’s V was computed to determine the strength of association for crosstabs larger than 2 x 2 (Morgan et al., 2012).
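The Fisher's exact test and phi coefficient for a 2 x 2 table can be sketched as below, applied to the questioning crosstab reported in Table 5.11 ([[9, 0], [1, 5]]); the computed phi matches the .87 reported in Table 5.10.

```python
# Fisher's exact test (two-sided) and phi coefficient for a 2 x 2 table.
import math
from scipy.stats import fisher_exact

table = [[9, 0], [1, 5]]  # questioning: task number (rows) x task duration (cols)
_, p = fisher_exact(table, alternative="two-sided")

# Phi for a 2 x 2 table: (ad - bc) / sqrt of the product of the marginals
(a, b), (c, d) = table
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(p, 3), round(phi, 2))
```

Fisher's exact test is used here instead of the chi-square statistic because, even after combining cells, the expected counts in a 15-teacher sample remain small.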

Association between the number and duration of teaching tasks.

The results of Fisher's exact test displayed in Table 5.10 indicated no association between the number and duration of teaching tasks, with the exception of questioning (p = .00) and classroom environment (teacher-student interaction) (p = .02). The Phi values in Table 5.10 suggested strong relationships between the number and duration of questioning tasks (.87) and classroom environment (teacher-student interaction) tasks (.65).


Table 5.10

Fisher's exact p-value and phi statistics for number and duration of teaching tasks (n = 15)

                      Number and duration of tasks
Teaching behaviour    Exact p-value   Phi-value
Orientation           .09             .55
Structuring           .24             .35
Modelling             .56             .26
Practice              1.00            .04
Questioning           .00             .87
CE (I)                .02             .65
CE (CCG)              1.00            -.11
CE (MB)               .07             1.00

Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour).

Table 5.11 displays the cell frequencies for questioning. Examination of Table 5.11 shows 60% of the teachers asked four or fewer questions, taking a maximum of 1.4 minutes; one teacher (7%) asked more than four questions that took no more than 1.4 minutes; and one-third (33%) asked more than four questions, which took more than 1.4 minutes. Given that questioning includes teacher wait time and teacher feedback to the student following a response to a question, this association is not surprising, but what is interesting is the minimal time (M = 1.4 minutes) teachers were observed using questioning behaviours. In addition, it is notable that for most teachers questioning (i.e., asking the question, wait time and feedback) averaged 16 seconds; this may have been attributable to the type of questions teachers relied on, which were mainly product questions (Evertson et al., 1980).

Table 5.11

Cell frequencies for number and duration of questioning tasks

Task number               Task duration 1         Task duration 2         Total
1 (≤ 4 questions)         9 (100%; 60% of total)  0 (0%; 0% of total)     9
2 (> 4 questions)         1 (17%; 7% of total)    5 (83%; 33% of total)   6
Total                     10 (67% of total)       5 (33% of total)        15

Notes. For task number, 1 refers to ≤ 4 questions, 2 refers to > 4 questions; for task duration, 1 refers to ≤ 1.4 minutes, 2 refers to > 1.4 minutes. Row percentages are within task number; percentages of total are of all 15 teachers.

The cell frequencies for teacher-student interactions displayed in Table 5.12 indicate that in 53% of classrooms, teacher-student interactions occurred up to four times and took no more than 4.7 minutes of lesson time, while in the remaining 47% of classrooms interactions took more than 4.7 minutes. While strong relationships between the number and duration of teacher-student interactions are reasonable, it is also likely these results indicate differences between teachers in terms of the importance attached to different teaching behaviours, and are therefore reflective of the different teaching approaches adopted by individual teachers.

Table 5.12

Cell frequencies for number and duration of CE (I) tasks

Task number               Task duration 1         Task duration 2         Total
1 (≤ 4 interactions)      8 (73%; 53% of total)   3 (27%; 20% of total)   11
2 (> 4 interactions)      0 (0%; 0% of total)     4 (100%; 27% of total)  4
Total                     8 (53% of total)        7 (47% of total)        15

Notes. CE (I) refers to classroom environment (interaction). For task number, 1 refers to ≤ 4 interactions, 2 refers to > 4 interactions; for task duration, 1 refers to ≤ 4.7 minutes, 2 refers to > 4.7 minutes. Row percentages are within task number; percentages of total are of all 15 teachers.


Association between mathematics content, and number and duration of teaching tasks.

Table 5.13 shows an association between mathematical content and the number of structuring tasks (p = .03), the number (p = .02) and duration (p = .00) of questioning tasks, and the duration of orientation tasks (p = .04). The Cramér's V results in Table 5.13 indicate strong relationships between mathematical content and the number of structuring tasks (.73), the number (.76) and duration (.89) of questioning tasks, and the duration of orientation tasks (.74).

Table 5.13

Fisher's exact p-value and Cramér's V statistics for number and duration of tasks and mathematical content of lesson

                      Task number and content         Task duration and content
Teaching behaviour    Exact p-value   Cramér's V      Exact p-value   Cramér's V
Orientation           .73             .33             .04             .74
Structuring           .03             .73             1.00            .31
Modelling             .82             .37             .35             .54
Practice              .82             .37             .28             .57
Questioning           .02             .76             .00             .89
CE (I)                .32             .49             .71             .38
CE (CCG)              .47             .53             1.00            .37
CE (MB)               .47             .53             .47             .53

Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour).
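For tables larger than 2 x 2, Cramér's V is used in place of phi. The sketch below applies it to the content-by-structuring crosstab in Table 5.14 (four content rows by two code columns); the result matches the .73 reported in Table 5.13.

```python
# Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1))) for an r x c table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[7, 1],   # Algebra: structuring code 1, code 2
                  [2, 1],   # Geometry
                  [1, 0],   # Trigonometry
                  [0, 3]])  # Statistics

chi2, _, _, _ = chi2_contingency(table, correction=False)
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(round(v, 2))
```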

The cell frequencies displayed in Table 5.14 show 88% of teachers who taught algebra were observed to engage in the average (M = 2.2) or fewer number of structuring tasks, and all teachers who taught statistics were observed to engage in more than the average (M = 2.2) number of structuring tasks.


Table 5.14

Cell frequencies for mathematical content and number of structuring tasks

Content          Task number 1           Task number 2           Total
Algebra          7 (88%; 46% of total)   1 (12%; 7% of total)    8
Geometry         2 (67%; 13% of total)   1 (33%; 7% of total)    3
Trigonometry     1 (100%; 7% of total)   0 (0%; 0% of total)     1
Statistics       0 (0%; 0% of total)     3 (100%; 20% of total)  3
Total            10 (67% of total)       5 (33% of total)        15

Note. For task number, 1 refers to ≤ 2.2 structuring tasks, 2 refers to > 2.2 structuring tasks. Row percentages are within content; percentages of total are of all 15 teachers.

Table 5.15 and Table 5.16 display the results of cross tabulations for mathematical content and the number and duration of questioning tasks. Examination of the cell frequencies in Table 5.15 and Table 5.16 indicates that most teachers (88%) who taught algebra used no more than the average number of questioning tasks (M = 3.7), which took no more than the average of 1.4 minutes of lesson time. In contrast, all teachers who taught geometry used more questioning tasks, which took up more time than the average for teachers who taught algebra.
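Cell-frequency tables such as Table 5.15 can be generated mechanically from the coded observations. A sketch with pandas follows, using records reconstructed from the table’s counts rather than the study’s raw data files:

```python
import pandas as pd

# One record per observed teacher: lesson content and the binary code for
# the number of questioning tasks (1 = at most the mean of 4, 2 = above it).
# Counts mirror Table 5.15; these are not the study's raw files.
records = pd.DataFrame({
    "content": ["Algebra"] * 8 + ["Geometry"] * 3
               + ["Trigonometry"] + ["Statistics"] * 3,
    "q_code":  [1, 1, 1, 1, 1, 1, 1, 2,   # algebra: 7 at/below, 1 above
                2, 2, 2,                  # geometry: all above
                1,                        # trigonometry
                1, 2, 2],                 # statistics
})

# Cell counts with row/column totals, and '% within content' percentages.
counts = pd.crosstab(records["content"], records["q_code"], margins=True)
row_pct = pd.crosstab(records["content"], records["q_code"],
                      normalize="index").mul(100).round(0)

print(counts)
print(row_pct)
```

The `margins=True` option adds the row and column totals that SPSS reports as “Total”, and `normalize="index"` reproduces the “% within content” rows.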


Table 5.15
Cell frequencies for mathematical content and number of questioning tasks

                                        Task number (Questioning)
Content                                 1        2        Total
Algebra        Teachers                 7        1        8
               % within Algebra         88%      12%      100%
               % of Total               46%      7%       53%
Geometry       Teachers                 0        3        3
               % within Geometry        0%       100%     100%
               % of Total               0%       20%      20%
Trigonometry   Teachers                 1        0        1
               % within Trigonometry    100%     0%       100%
               % of Total               7%       0%       7%
Statistics     Teachers                 1        2        3
               % within Statistics      33%      67%      100%
               % of Total               7%       13%      20%
Total          Teachers                 9        6        15
               % within all contents    60%      40%      100%
Note. For questioning task number, 1 refers to ≤ 4 questioning tasks, 2 refers to > 4 questioning tasks.

Table 5.16
Cell frequencies for mathematical content and duration of questioning tasks

                                        Task duration (Questioning)
Content                                 1        2        Total
Algebra        Teachers                 8        0        8
               % within Algebra         100%     0%       100%
               % of Total               53%      0%       53%
Geometry       Teachers                 0        3        3
               % within Geometry        0%       100%     100%
               % of Total               0%       20%      20%
Trigonometry   Teachers                 1        0        1
               % within Trigonometry    100%     0%       100%
               % of Total               7%       0%       7%
Statistics     Teachers                 1        2        3
               % within Statistics      33%      67%      100%
               % of Total               7%       13%      20%
Total          Teachers                 10       5        15
               % within all contents    67%      33%      100%
Note. For questioning task duration, 1 refers to ≤ 1.4 minutes, 2 refers to > 1.4 minutes.


The cross tabulations for mathematical content and duration of orientation tasks (see Table 5.17) suggest that the orientation tasks of three-quarters (75%) of teachers who taught algebra, and of all teachers who taught trigonometry and statistics, took no more than 4.5 minutes. In contrast, the orientation tasks of the remaining quarter (25%) of teachers who taught algebra, and of all teachers who taught geometry, took longer than 4.5 minutes.

Table 5.17
Cell frequencies for mathematical content and duration of orientation tasks

                                        Task duration (Orientation)
Content                                 1        2        Total
Algebra        Teachers                 6        2        8
               % within Algebra         75%      25%      100%
               % of Total               40%      13%      53%
Geometry       Teachers                 0        3        3
               % within Geometry        0%       100%     100%
               % of Total               0%       20%      20%
Trigonometry   Teachers                 1        0        1
               % within Trigonometry    100%     0%       100%
               % of Total               7%       0%       7%
Statistics     Teachers                 3        0        3
               % within Statistics      100%     0%       100%
               % of Total               20%      0%       20%
Total          Teachers                 10       5        15
               % within all content     67%      33%      100%
Note. For orientation task duration, 1 refers to ≤ 4.5 minutes, 2 refers to > 4.5 minutes.

Although these results suggest that mathematical content (i.e., algebra or geometry) likely determined the extent to which teachers engaged in orientation tasks, the types of questions asked, and therefore the instructional approach used (i.e., basic skills or complex), a note of caution is due here: other factors not considered in the analysis, such as student ability, may have contributed to this result.


Association between teachers, and number and duration of teaching tasks.

The results of the Fisher’s exact test are shown in Table 5.18, and these indicate there was no significant relationship between teachers and the number and duration of teaching tasks. Therefore, the frequency and duration of teaching tasks were independent of individual teachers.

Table 5.18
Fisher’s exact p-value and Cramer’s V statistics for number and duration of teaching tasks among teachers

                      Task number and teachers      Task duration and teachers
Teaching behaviour    Exact p-value   Cramer’s V    Exact p-value   Cramer’s V
Orientation           1.00            1.00          1.00            1.00
Structuring           1.00            1.00          1.00            1.00
Modelling             1.00            1.00          1.00            1.00
Practice              1.00            1.00          1.00            1.00
Questioning           1.00            1.00          1.00            1.00
CE (I)                1.00            1.00          1.00            1.00
CE (CCG)              1.00            1.00          1.00            1.00
CE (MB)               1.00            1.00          1.00            1.00
Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour).

Association between school, and number and duration of teaching tasks.

The results in Table 5.19 show no association between schools and the number and duration of teaching tasks, except for the number of modelling tasks (p = .01) and the duration of structuring tasks (p = .02). The Cramer’s V results in Table 5.19 indicate strong relationships between school and both the number of modelling tasks and the duration of structuring tasks (Cramer’s V = 1.00 in each case).

Examination of cell frequencies in Table D6.1 (see Appendix D6) suggests that all three teachers observed at Aruna (ranked 15 in the value-added analysis (VAA) conducted in phase one) engaged in no more than the average (M = 4.2) number of modelling behaviours, whereas all three teachers from Tasin (ranked 2) engaged in modelling behaviours more often, and were also observed to use structuring behaviours that took more than the average amount of time (M = 1.9 minutes) among these teachers (Table D6.2). Although this is not strong evidence, it possibly indicates differences in approaches between schools.

Table 5.19
Fisher’s exact p-value and Cramer’s V statistics for number and duration of teaching tasks among schools

                      Task number and schools       Task duration and schools
Teaching behaviour    Exact p-value   Cramer’s V    Exact p-value   Cramer’s V
Orientation           1.00            .48           .87             .63
Structuring           1.00            .50           .02             1.00
Modelling             .01             1.00          .29             .80
Practice              .59             .74           1.00            .58
Questioning           1.00            .55           1.00            .50
CE (I)                .96             .56           .71             .68
CE (CCG)              1.00            .54           .74             .57
CE (MB)               1.00            .54           1.00            .54
Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour).

Association between teaching experience, and number and duration of teaching tasks.

The Fisher’s exact test result in Table D6.3 (see Appendix D6) shows that no significant relationship exists between experience in mathematics teaching (in years) and the number and duration of orientation, structuring, practice, modelling, questioning and classroom environment behaviours.

Association between teaching qualifications, and number and duration of teaching tasks.

Prior to conducting the Fisher’s exact test for measuring the association between teacher academic qualification (formal educational degree completed) and the frequency of teaching behaviour, teacher qualifications were coded: code 1 was assigned to an undergraduate degree (i.e., Bachelor of Science [BSc], the eligibility criterion to be a secondary school mathematics teacher), and code 2 was assigned to academic qualifications in addition to the BSc. The Fisher’s exact test displayed in Table D6.4 (see Appendix D6) shows that teacher academic qualifications are not significantly related to the number and duration of teaching behaviours.

Association between teacher self-efficacy beliefs, and number and duration of teaching tasks.

The Fisher’s exact test result in Table D6.5 (see Appendix D6) shows teacher self-efficacy beliefs with regard to orientation, structuring, modelling, practice, questioning and the classroom learning environment in teaching mathematics at the secondary level are not significantly related to the frequency (i.e., task number and duration) of the associated teaching behaviours. The findings suggest that self-efficacy beliefs related to the eight teaching behaviours (orientation, structuring, modelling, practice, questioning, classroom environment [interaction], classroom environment [competition, cooperation, group work] and classroom environment [misbehaviour]) had no relationship with the quantitative characteristics of the teaching behaviours.

Determining the qualitative characteristics of teaching behaviours.

The qualitative characteristics of teaching were determined by measuring: (1) the stage at which activities took place, (2) the focus of activities, including specificity and purpose, (3) the quality and clarity of activities for students, and (4) the extent to which activities were differentiated to meet student needs.
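As a data structure, each observed behaviour can be thought of as one record scored on these four dimensions. The sketch below is purely illustrative (the field names and example codes are this sketch’s own); the actual categories and descriptors are those in Appendix D3:

```python
from dataclasses import dataclass

@dataclass
class BehaviourCoding:
    """One observed teaching behaviour, coded on the four dimensions.

    Codes here are illustrative only; the study's categories and
    descriptors are defined in Appendix D3 (after Creemers & Kyriakides, 2012).
    """
    behaviour: str        # e.g. "orientation", "structuring"
    stage: int            # when in the lesson the activity occurred
    focus: int            # specificity and purpose (task, lesson, unit ...)
    quality: int          # clarity/appropriateness for students
    differentiation: int  # 0 if not adapted to student needs

# An example record: orientation observed, not differentiated.
obs = BehaviourCoding("orientation", stage=4, focus=2, quality=2,
                      differentiation=0)
print(obs.behaviour, obs.stage, obs.differentiation)
```

One record per behaviour per observed lesson would then feed the mode and frequency calculations reported later in the chapter.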

Development and application of coding sequences.

Codes based on Creemers and Kyriakides’ (2012) recommendations were adopted (see Appendix D3). The coding sequences for the stage dimension of orientation, structuring, modelling, practice and questioning are shown in Table D3.1, and those for the classroom environment in Table D3.4 (see Appendix D3). It can be seen from Table D3.1 and Table D3.4 that stage was measured in the same way for all teaching behaviours, as activities may take place during different parts of a lesson (e.g., introduction, core, end) or at multiple times in a lesson.

Table D3.2 displays the codes and descriptors used for determining the focus of orientation, structuring, modelling, practice and questioning behaviours. Table D3.2 indicates the focus of orientation, questioning and practice activities may be task related, linked to the lesson, linked to a unit, or linked to all three. Similarly, structuring activities may be related to a previous lesson, the present lesson or the unit, and modelling activities may be used in one lesson or unit or across several units (see Table D3.2). The codes used to determine the focus of classroom environment activities are shown in Table D3.5. From Table D3.5, the focus of classroom environment (CE) activities is related to the purpose of the interaction. Additionally, the focus of CE (misbehaviour) is determined by the nature of student misbehaviour (see Table D3.5 in Appendix D3).

Table D3.3 displays the codes with descriptors for determining the quality of orientation, structuring, modelling, practice and questioning, and Table D3.6 shows the codes for the classroom environment. It can be seen from Table D3.3 and Table D3.6 that different measures are used to determine the quality of each teaching behaviour. For example, the quality of orientation activities is measured by whether a teacher simply introduces the topic in a lesson, provides a rationale for learning the topic, encourages students to identify reasons, or uses a combination of these approaches. The quality of structuring activities refers to the clarity of lesson structure and is measured by correct student responses related to structuring (see Table D3.3 in Appendix D3).


From Table D3.3, three aspects of modelling determined quality. The first refers to the teacher’s role in providing a problem-solving strategy to students: the teacher may give the strategy to students, or guide or direct students to solve the problem. The second aspect refers to the phase of the lesson when the strategy was provided: the strategy may be provided before students attempt to solve the problem, or after they attempt to solve it. The final aspect was the appropriateness of modelling; this was determined by student success in using the problem-solving strategy to solve a given problem.

With regard to practice, Table D3.3 shows that quality was determined by the level of complexity of the problem to be solved, which may be similar to the example that has been modelled, or more complex.

Table D3.3 shows the quality of questioning was determined by three aspects. The first aspect was the type of question (i.e., product or process). The second aspect was the teacher’s response to unanswered questions, which included restating the question in easier words, asking an easier question, moving to another student or to the next question, giving the answer her/himself, or a mixture of these responses. The third aspect was the feedback pattern, specifically feedback on the answer given by a student and feedback to the student about the answer. In providing feedback on a student’s answer, the teacher may make no comment, may comment to specify whether the answer is correct or incorrect, may invite other students to comment on the given answer, or may comment as well as invite other students to comment (Table D3.3). In providing feedback to students about the answer, teachers may make no comment, may make negative comments on an incorrect answer, or may provide positive comments for a correct answer and constructive comments for incorrect and partly correct answers (see Table D3.3).

Table D3.6 shows that the quality of teacher-student interaction activities (i.e., CE [I], CE [CCG]), no interaction, and interruptions was measured by identifying the extent to which such interaction, or its absence, was able to keep students on-task. In addition, Table D3.6 indicates the quality of CE (MB) was measured by the way a teacher responds to student misbehaviour: the teacher may deliberately ignore the misbehaviour, may use a strategy that does not solve the problem, or may use a strategy that resolves the misbehaviour.

Table D3.7 displays the coding sequences for the differentiation dimension of orientation, structuring, modelling, practice, questioning and classroom environment. It can be seen from Table D3.7 that differentiation is measured in a similar way for all teaching behaviours; that is, by observing the extent to which teaching behaviours are differentiated to meet students’ learning needs.

The results of applying the coding sequences for the stage, focus, quality and differentiation dimensions of teaching behaviours to the teacher observations are reported in Tables 5.20, 5.21 and 5.22, and Tables D4.1, D4.2 and D4.3 of Appendix D4. It should be noted that the stage dimension of structuring (Table 5.20) has been reported excluding data on ‘transitions’ (see p. 137 for rationale). The tables display the raw coded data analysed to determine the qualitative characteristics of teaching behaviours.

Descriptive statistics.

Descriptive statistics, in particular the mode, for the stage, focus, and quality dimensions of teaching behaviours were calculated in SPSS 22.0. It should be noted that the mode for the differentiation dimension was not calculated, as this dimension was not observed in any of the teaching behaviours (see Table 5.20). This finding is consistent with Edwards et al. (2010), who observed classroom teaching in two primary schools in Bangladesh and reported a lack of differentiated teaching practices and the prevalence of whole-class instructional practices. A possible speculative explanation for the lack


Table 5.20
Stage, focus and differentiation dimensions of orientation, structuring, modelling, practice and questioning behaviours

          Orientation      Structuring      Modelling        Practice         Questioning
Teacher   St.  Foc. Diff.  St.  Foc. Diff.  St.  Foc. Diff.  St.  Foc. Diff.  St.  Foc. Diff.
Shapla    4    2    0      4    2    0      4    2    0      2    1    0      0    0    0
Moni      4    4    0      1    3    0      4    3    0      0    0    0      0    0    0
Nila      4    1    0      4    4    0      4    3    0      3    1    0      4    1    0
Adnan     4    4    0      4    4    0      4    2    0      3    2    0      0    0    0
Saddam    4    4    0      1    2    0      2    3    0      3    2    0      4    1    0
Angel     4    4    0      1    1    0      4    3    0      3    2    0      0    0    0
Momota    4    4    0      4    4    0      3    2    0      3    2    0      4    1    0
Bela      1    2    0      1    1    0      4    2    0      4    4    0      0    0    0
Hazzaz    4    4    0      4    4    0      4    2    0      3    2    0      4    1    0
Antu      4    4    0      1    2    0      4    2    0      2    2    0      3    1    0
Bindu     1    2    0      3    2    0      4    3    0      4    4    0      1    1    0
Babu      4    4    0      4    4    0      4    2    0      3    2    0      0    0    0
Priya     4    2    0      1    2    0      4    2    0      3    2    0      4    1    0
Kazol     1    2    0      0    0    0      4    2    0      4    4    0      4    4    0
Shilpo    2    2    0      4    4    0      4    2    0      3    1    0      4    1    0
Notes. St. refers to stage, Foc. refers to focus, Diff. refers to differentiation. See Appendix D3 for coding and category descriptors. A code of ‘0’ indicates the behaviour was not observed.


Table 5.21
Quality dimension of orientation, structuring, modelling, practice and questioning behaviours

                                  Modelling                             Questioning
Teacher   Orient.  Struct.  Teacher  Phase  Approp.  Practice  Question  Response    Answer    Student
                            role                               type      no answer   feedback  feedback
Shapla    2        3        1        1      1        1         0         0           0         0
Moni      4        1        4        3      1        0         0         0           0         0
Nila      2        1        4        3      1        1         3         6           4         4
Adnan     4        1        4        1      1        2         0         0           0         0
Saddam    4        1        1        1      1        2         1         3           2         4
Angel     4        1        1        2      2        1         0         0           0         0
Momota    4        1        1        2      1        2         1         5           4         4
Bela      1        1        1        2      3        1         0         0           0         0
Hazzaz    4        1        1        1      1        1         1         0           4         2
Antu      2        1        1        2      1        1         1         0           2         4
Bindu     2        1        4        3      1        2         1         0           2         2
Babu      4        1        1        3      3        2         0         0           0         0
Priya     4        1        4        1      1        2         3         3           4         4
Kazol     3        0        4        3      1        3         1         0           2         4
Shilpo    2        3        1        1      1        1         1         0           2         2
Notes. Orient. refers to orientation, Struct. refers to structuring, Approp. refers to appropriateness. See Appendix D3 for coding and category descriptors. A code of ‘0’ indicates the behaviour was not observed.


Table 5.22
Stage, focus and quality dimensions of classroom as a learning environment (teacher-initiated)

          Lecture/        Give            Make            Problems/       Organisation    Monitor         Social
          present         instructions    comments        questions                                       interaction
Teacher   St. Foc. Qual.  St. Foc. Qual.  St. Foc. Qual.  St. Foc. Qual.  St. Foc. Qual.  St. Foc. Qual.  St. Foc. Qual.
Shapla    4   2   1       4   2   1       4   2   1       4   2   1       4   2   1       2   2   3       4   4   2
Moni      4   2   1       4   2   1       0   0   0       4   2   1       4   2   1       0   0   0       4   4   2
Nila      4   2   1       4   2   1       4   2   1       4   2   1       1   2   3       0   0   0       4   4   2
Adnan     4   2   1       0   0   0       3   2   1       4   2   1       4   2   3       4   2   3       4   4   2
Saddam    4   2   1       4   2   1       4   2   3       4   2   1       1   5   3       2   5   3       4   4   2
Angel     4   2   1       4   2   1       1   2   1       4   2   1       4   2   3       2   5   1       4   4   3
Momota    4   2   3       3   2   1       4   2   1       4   2   1       4   2   1       3   2   1       4   4   2
Bela      4   2   1       3   2   1       0   0   0       4   2   1       4   2   3       4   2   3       1   4   2
Hazzaz    4   2   1       3   2   1       1   2   1       4   2   1       4   2   1       3   2   1       4   4   2
Antu      4   2   1       3   2   1       4   2   1       4   2   1       4   2   1       3   2   3       4   4   2
Bindu     4   2   1       3   2   3       4   2   1       4   2   1       4   2   3       4   2   3       3   4   2
Babu      4   2   1       3   2   1       1   2   1       4   2   1       4   2   3       2   2   1       4   4   2
Priya     4   2   1       4   2   1       4   2   3       4   2   1       4   2   1       3   2   1       4   4   3
Kazol     4   2   1       4   2   1       4   2   1       4   2   1       4   2   1       4   2   1       4   4   2
Shilpo    4   2   1       3   2   1       0   0   0       2   2   1       4   2   3       3   2   3       4   4   2
Notes. See Appendix D3 for coding and category descriptors. A code of ‘0’ indicates the behaviour was not observed. St. refers to stage, Foc. refers to focus, Qual. refers to quality.


of differentiation is related to the cultural context: in Bangladesh, it is considered unfair if a teacher responds differently to, or differentiates activities for, particular students in the classroom.

Table 5.23 displays the number of categories and the mode statistics for the stage and focus dimensions of orientation, structuring, modelling, practice and questioning behaviours. In terms of the stage dimension, the modes in Table 5.23 suggest orientation, structuring, modelling and questioning activities were mostly employed by teachers at multiple stages of the lesson, with the exception of practice activities, which mostly occurred at the end of the lesson.

Table 5.23

Descriptive statistics for stage and focus dimensions of orientation, structuring, modelling, practice and questioning

                      Stage                      Focus
Teaching behaviour    Categories     Mode        Categories     Mode
                      (number)                   (number)
Orientation           4              4           4              4
Structuring           4              4           4              4
Modelling             4              4           3              2
Practice              4              3           4              2
Questioning           4              4           4              1
Note. See Appendix D3 for coding and category descriptors.
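The modes in Table 5.23 can be recovered directly from the coded columns of Table 5.20. A brief sketch follows (the study used SPSS 22.0; treating the code 0, ‘behaviour not observed’, as missing is an assumption made here, not a procedure stated in the thesis):

```python
from collections import Counter

def coded_mode(codes):
    """Most frequent code, ignoring 0 ('behaviour not observed')."""
    observed = [c for c in codes if c != 0]
    return Counter(observed).most_common(1)[0][0] if observed else 0

# Stage codes for orientation across the 15 teachers
# (first column of Table 5.20).
orientation_stage = [4, 4, 4, 4, 4, 4, 4, 1, 4, 4, 1, 4, 4, 1, 2]

print(coded_mode(orientation_stage))  # 4 = multiple stages of the lesson
```

The result matches the mode of 4 reported for the stage dimension of orientation in Table 5.23.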

With respect to the focus dimension, the modes in Table 5.23 indicate orientation and structuring activities were designed to help students understand how tasks, lessons, and units were related; while modelling activities were unit related, practice activities were lesson related, and questioning activities were mostly task related. Such mode patterns suggest that most of these teachers used a three-part lesson structure (Muijs & Reynolds, 2005), consistent with a direct instruction approach (DfEE, 1999; Good et al., 1983). This means the first part of the lesson is devoted to whole-class instruction, followed by guided practice sessions and individual practice activities to reinforce learning (Muijs & Reynolds, 2005).


Table 5.24 displays the number of categories and the modes computed for the quality dimensions of orientation, structuring, modelling, practice and questioning behaviours. It is apparent from Table 5.24 that orientation activities were related to learning and were conducted to ensure students were able to specify the learning/task goal, and that activities were structured to enable student understanding. Further, modelling mainly consisted of teaching students a problem-solving strategy after a problem had been presented, and this was observed to be successful (see Table 5.24).

Table 5.24

Descriptive statistics for quality dimension of orientation, structuring, modelling, practice and questioning

Teaching behaviour   Specific behaviour                    Categories (number)   Mode
Orientation                                                4                     4
Structuring                                                3                     1
Modelling            Teacher role                          4                     1
                     Lesson phase                          3                     1
                     Appropriate                           3                     1
Practice                                                   3                     1
Questioning          Question type                         3                     1
                     Teacher response - no answer          6                     3
                     Teacher response - answer feedback    4                     2
                     Teacher response - student feedback   4                     4
Note. See Appendix D3 for coding and category descriptors.

Additionally, most practice activities were similar to the problem modelled by the teacher and were provided to facilitate students’ cognitive processes, and Table 5.24 suggests teachers mainly used simple questions. The modes for the quality dimension of questioning behaviours (Table 5.24) indicate that if a student did not provide an answer to a question, the teacher would move to the next student, and if a student did respond with an answer, the teacher would respond with a negative, positive or constructive comment depending on the answer.


Table 5.25 indicates the number of categories and the modes for the stage, focus and quality dimensions of the classroom as a learning environment (or the contributions from the teacher). Specifically, Table 5.25 shows the modes for teacher-student interactions that are teacher-initiated or student-initiated; for the treatment of students by the teacher to encourage fair competition, cooperation, and group work, and to deal with disorder; and for behaviour initiated by neither teacher nor student, including interruptions from external sources and pauses, silences or periods of confusion in which behaviour could not be categorised.

In terms of the stage dimension, it can be seen from Table 5.25 that teacher- and student-initiated interactions occurred at multiple stages; the exceptions are monitoring, conducted at the end of the lesson; student questions, which arose either at the beginning or during the core stage of the lesson; and encouragement of fair competition, cooperation, and group work among students. It is suggested that monitoring is more likely to occur when students are engaged in practice activities, which according to Table 5.25 were observed most commonly at the end of the lesson.

Table 5.25 shows the focus of teacher- and student-initiated interaction, encouragement of fair competition, cooperation, and group work, dealing with misbehaviour, and behaviour initiated by neither teacher nor student, was to facilitate student learning; only when teachers interacted socially were multiple purposes (e.g., learning, managerial, social) served by this type of behaviour. It can be seen from the modes in Table 5.25 that the immediate effect of the quality of teacher-initiated interactions (i.e., presentation, instruction, comment and questioning) and student-initiated interactions was to ensure on-task student behaviour, while teacher-initiated interactions (i.e., organising learning material, monitoring and social interaction) either did not contribute or had mixed effects with regard to on-task student behaviour.


Table 5.25

Descriptive statistics for stage, focus and quality dimensions of classroom as a learning environment

                                                                     Stage            Focus            Quality
Teaching behaviour    Specific behaviour                             Cat.   Mode      Cat.   Mode      Cat.   Mode
CE (I)                Presentation                                   4      4         5      2         3      1
Teacher-initiated     Instruction                                    4      4         5      2         3      1
                      Comment                                        4      4         5      2         3      1
                      Pose problems and/or questions                 4      4         5      2         3      1
                      Organise learning material                     4      4         5      2         3      3
                      Monitor                                        4      3         5      2         3      3
                      Social interaction                             4      4         5      4         3      2
CE (I)                Give answer                                    4      4         5      2         3      1
Student-initiated     Ask content question                           4      2         5      2         3      1
                      Spontaneous speech                             4      1         5      2         3      1
                      Collaboration student-initiated                4      4         5      2         3      1
                      Collaboration teacher prompted                 4      4         5      2         3      1
                      Works on own                                   4      4         5      2         3      1
CE (CCG)              Fair competition, cooperation, group work      4      2         5      2         3      2
CE (MB)               Student misbehaviour                           4      4         2      1         n/a    n/a
                      Deals with misbehaviour                        4      4         5      2         3      1
No interaction        Organise learning material                     4      4         5      2         3      3
                      Other activities                               4      4         5      1         3      1
                      Monitor                                        4      4         5      2         3      1
Interruptions         External                                       4      4         5      n/a       3      2
                      Silence                                        4      4         5      5         3      2
Notes. See Appendix D3 for coding and category descriptors. Cat. refers to number of categories. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment (misbehaviour).


Frequency distributions.

To determine the extent to which the modes of the stage, focus, and quality dimensions of orientation, structuring, modelling, practice, questioning and classroom as a learning environment behaviours were prevalent among teachers, frequency tables were generated. The results are shown in Tables D5.3, D5.4, D5.5, D5.6, D5.7, and D5.8 in Appendix D5.
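Generating such a frequency table amounts to tabulating each coded column and expressing the counts as percentages of the 15 teachers. A small sketch follows, using the structuring stage codes from Table 5.20 (an illustrative reproduction, not the study’s SPSS output):

```python
from collections import Counter

# Structuring stage codes for the 15 teachers (Table 5.20):
# 0 = not observed, 1 = beginning, 3 = end, 4 = multiple stages.
structuring_stage = [4, 1, 4, 4, 1, 1, 4, 1, 4, 1, 3, 4, 1, 0, 4]

n = len(structuring_stage)
freq = Counter(structuring_stage)
for code in sorted(freq):
    print(f"code {code}: {freq[code]} teachers ({100 * freq[code] / n:.0f}%)")
```

The resulting percentages (47% at multiple stages, 40% at the beginning) match the figures reported for structuring in Table D5.3.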

Orientation.

Orientation refers to the activities expected to help students understand the importance of learning activities (Creemers & Kyriakides, 2006). It was utilised by all teachers: 73% of teachers used orientation tasks at multiple stages in the lesson, while 20% used orientation tasks at the start of the lesson (see Table D5.4 in Appendix D5). It can be seen from Table D5.4 that orientation activities for some teachers (40%) were task related, and for other teachers (53%) were task, lesson, and unit related. Table D5.5 suggests that most teachers’ (53%) orientation activities outlined the reasons for student participation in the task or lesson, and most students (53%) in these classes were able to articulate reasons for participating in the task or lesson.

Structuring.

Structuring involves the review of lesson goals, emphasising important parts of the lesson, signalling transitions between different parts of the lesson, and summing up at the end of the lesson (Vanlaar et al., 2016). Table D5.3 (in Appendix D5) indicates a little under half of the teachers (47%) engaged in structuring activities on multiple occasions in the lesson, while 40% displayed structuring behaviours at the beginning of the lesson. Table D5.4 shows the focus of structuring activities for some teachers (33%) was unit related, while for others (40%) the activities were related to previous and present lessons and linked to the current unit of work. In terms of quality, Table D5.5 indicates teacher structuring activities were mostly clear to students (80%), based on the observations that students were able to respond correctly to teacher questions. This finding suggests that teacher structuring activities were important in contributing to student learning, and is consistent with previous research evidence (Brophy & Good, 1986; Rosenshine & Stevens, 1986).

Modelling.

Modelling is associated with teaching students higher-order thinking skills, in particular problem solving (Creemers & Kyriakides, 2012). Table D5.3 (see Appendix D5) shows most teachers (86%) in this study engaged in modelling at multiple stages in the lesson. The focus of teacher modelling behaviour was mostly related to a current unit of work (67%), but for a smaller number of teachers (33%) the focus of modelling was across units (see Table D5.4). With respect to the quality of modelling, Table D5.5 shows that most teachers (53%) explained problem-solving strategies to students, while some teachers (47%) guided students to develop their own strategy based on the teacher-modelled problem-solving strategy. Further, Table D5.5 suggests most of these teachers (40%) provided a problem-solving strategy after students had faced difficulties in solving problems. Moreover, Table D5.5 indicates that 80% of teacher modelling activities were appropriate, as students were able to solve the problem successfully and/or answer teacher questions correctly.

Practice.

Practice refers to the opportunities provided to students to practice new learning (Creemers & Kyriakides, 2012). Table D5.3 (see Appendix D5) shows that most teachers (60%) provided individual work and/or group work for students as practice tasks at the end of the lesson, and 20% engaged students in practice tasks at both the core stage and the end of the lesson. From Table D5.4 it can be seen that 53% of practice tasks were lesson related, and 20% were task, lesson, and unit related. Table D5.5 indicates 47% of practice activities emphasised simple tasks and 47% involved complex tasks. In addition, most practice activities involved individual seatwork (47%) or group work (53%) (this is not shown in Table D5.5, Appendix D5).

Questioning.

Questioning is the most widely used form of teacher-student interaction (Muijs & Reynolds, 2005). The purpose of questioning is to involve students in the class discussion and to check students’ understanding of the lesson (Creemers & Kyriakides, 2012). Table D5.3 shows that 40% of teachers did not employ questioning, while 47% of teachers asked questions at more than one stage of the lesson, with questions that were mostly (53%) task related (see Table D5.4 in Appendix D5). In terms of the quality of questioning, most teachers (47%) asked product or simple questions, and only a small number of teachers (13%) used both product and process questions. Further analysis of the results in Table D5.5 suggests the ratio of teachers using product questions to those using process questions was 9:2. In addition, it can be seen in Table D5.5 that half of the teachers (50%) responded by moving to the next student when a question was unanswered, 25% moved to the next question, and 25% either answered it themselves or restated the question. When a student answered a question, most teachers commented by acknowledging whether the answer was correct, partly correct or incorrect, and 44% of teachers invited other students to make comments on the answer. Table D5.5 shows most teacher comments (67%) were mixed, with positive comments for a right answer and constructive comments for other types of answers.

A possible explanation is that teachers use practices (e.g., presentation, explaining and structuring) associated with the direct teaching approach, leaving little time for questioning. The results with regard to questioning suggest possible weaknesses in teacher questioning skills. Teacher questioning behaviour is an important contributor to student achievement and learning (Gall et al., 1978; Mortimore et al., 1988; Soar & Soar, 1979; Stallings & Kaskowitz, 1974). Effective teachers ask many process-type questions, which elicit deeper thinking and more detailed responses and enable them to give formative feedback to the student so s/he can correct the answer (Brophy & Good, 1986; Creemers & Kyriakides, 2006, 2012; Leven & Long, 1981). It is suggested that the low frequency and duration of questioning behaviour may indicate that teachers in this study mostly used product questions (simple response) and provided little formative feedback to students.

CE (Teacher-initiated) interaction.

Table D5.6 (see Appendix D5) shows CE (teacher-initiated) interaction occurred at multiple stages in the lesson, with the exception of monitoring, which for most teachers (33%) occurred at the end of the lesson. The purpose of CE (teacher-initiated) interaction was mainly to facilitate student learning by creating a friendly learning environment (see Table D5.7). Generally, Table D5.8 shows that the effect of CE (teacher-initiated) interaction was to keep students on-task (67%), but organisation of material (53%), monitoring of student work (47%) and efforts to interact socially (13%) had mixed effects. A possible explanation for the mixed results is that organisation of materials (e.g., handing out materials) is a non-instructional activity; if teachers have not established rules about what students are to do when this event occurs, it is likely students will become off-task (Emmer & Evertson, 2012). A possible reason for the mixed results with regard to monitoring of student work is that some students may have completed their work early and become off-task. In the case of social interaction, most teachers were observed to interact socially with students before and after the lesson for a non-learning purpose (i.e., facilitating a friendly relationship with students), and in this situation students may not have been expected to be on-task.

CE (Student-initiated) interaction.

Table D5.6 shows that, with the exceptions of asking content questions and spontaneous speech, most other types of student-initiated interaction (i.e., giving answers to questions [53%], collaboration [27%], teacher-prompted collaboration [20%], and working on their own [67%]) were observed at multiple stages of the lesson. This suggests that not all teachers (53%) were able to create interactive and collaborative learning environments, which is rather concerning given that the main purpose of student-initiated interaction was to facilitate student learning (see Table D5.7). Interestingly, more on-task student behaviours were observed in classes where teachers encouraged fair competition, cooperation, and group work than in classrooms where students were encouraged to work on their own (see Table D5.8 in Appendix D5).

CE (Competition, cooperation, group work).

Table D5.6 (see Appendix D5) indicates that 40% of teachers encouraged fair competition, cooperation, and group work during the core part of the lesson, and the purpose was to facilitate student learning (see Table D5.7 in Appendix D5). Although one-third of students (33%) were observed to be off-task in these classes, this was because the students were following the teacher's instructions with regard to seating and room re-arrangement for group work (see Table D5.8).

CE (Misbehaviour).

Table D5.6 (see Appendix D5) shows that student misbehaviour was observed in one classroom (7%) and, while it occurred during multiple stages of the lesson, it was incidental in nature (see Table D5.7). The teacher was observed to deal with the misbehaviour by deliberately ignoring it (see Table D5.8) to minimise interference with the learning tasks in the classroom (see Table D5.7).

No interaction.

Table D5.6 (see Appendix D5) shows most teachers (60%) organised content or learning material and monitored student work without any interaction, at multiple stages of the lesson. Table D5.7 shows that the main focus of organising material was to foster learning and management and to create a friendly classroom environment (73%), while facilitating learning was the focus of monitoring activities for most teachers (93%).

In terms of quality, Table D5.8 (see Appendix D5) shows 53% of teachers who organised learning material without interacting with students were unable to maintain student on-task behaviour. Interestingly, two-thirds of teachers (67%) who engaged in other activities without student interaction maintained student on-task behaviour. Similarly, Table D5.8 indicates that 80% of teachers were able to keep students on-task through monitoring, despite a lack of interaction with students.

Interruption and /or silence.

It is apparent from Table D5.6 (see Appendix D5) that interruptions were observed in 27% of classrooms, and 20% of these occurred at multiple stages of the lesson from external sources. Table D5.7 shows that 73% had nothing to do with learning, management or creating positive learning environments, and all interruptions contributed to off-task student behaviour (see Table D5.8). Although minimal interruptions were observed, they clearly wasted lesson time and had adverse effects on student learning. Table D5.6 shows that 'silence' was observed in two classrooms (14%): at the end of the lesson in one classroom and during multiple stages of the lesson in the other. Although the purpose of silence was observed to be associated with learning, students in these classrooms were not on-task (see Table D5.8).


Modifications to developmental levels of teaching skill.

Kyriakides et al. (2009) identified five developmental levels of teaching skill. As some of these teaching behaviours were not identified in this study, the developmental levels were modified to fit these data. Thus, time management and assessment behaviours were excluded, and two teaching behaviours identified by Kyriakides et al. (2009), teacher-student relations and student-student relations, were renamed classroom environment (teacher-initiated) and classroom environment (student-initiated) respectively for the purpose of clarity. The modified five developmental levels of teaching skill are shown in Table D8.1 in Appendix D8. This table shows five developmental levels that correspond to the frequency, stage, focus, quality and differentiation dimensions of orientation, structuring, modelling, practice, questioning, CE (teacher-initiated) interaction, and CE (student-initiated) interaction.

Determining the developmental levels of teaching skill.

A checklist of criteria, displayed in Table D8.2 (see Appendix D8), was developed for the study to determine the developmental level of teaching skill, as few guidelines existed with regard to classifying levels of teaching skill using categorical data generated through teacher observations; previous studies (Antoniou & Kyriakides, 2013; Kyriakides et al., 2009) have used SEM analysis. The checklist of criteria in Table D8.2 was developed using the descriptions of the four dimensions (i.e., stage, focus, quality, differentiation) of orientation, structuring, practice, modelling, questioning and classroom learning environment behaviours from Creemers & Kyriakides (2008), and the literature review in Chapter 3.

Table D8.2 shows the criterion used to determine the frequency of all teaching behaviours was an observation of activities associated with the behaviour at least once during the recorded lesson observations. The criterion for the stage dimension was based on Creemers and Kyriakides' (2006) assertion that teaching behaviour needs to take place over a long period of time to ensure continuous direct or indirect effects on student learning. Thus, Table D8.2 shows the criterion used for the stage dimension of all teaching behaviours (with the exception of structuring and practice) was an observation of teaching behaviours at multiple stages of the lesson. For structuring, the criterion was an observation of this teaching behaviour at any stage of the lesson with the exception of 'transitions', as transitions may be implied when structuring is observed at multiple stages of the lesson. According to the literature review in Chapter 3 (see p. 43), practice activities should occur following new learning (Rosenshine, 1986, 1987, 1995); therefore, the criterion for the stage dimension of practice was evidence of practice at any stage of the lesson.

It should be noted that for CE (student-initiated) interaction (see Table D8.2) this criterion includes 'student questions, spontaneous speech and collaboration' at multiple stages of the lesson, but not 'student answers or working on their own'. These two interactions were excluded from the criterion because they may be regarded as a response to the teacher rather than a contribution to the learning environment.

Examination of Table D8.2 shows the criterion for the focus and quality dimensions of each teaching behaviour is different. The focus of practice and questioning activities was determined by the extent to which these activities were related to at least one task, whereas the focus of orientation and modelling was determined by the extent to which these activities were related to the whole lesson. This qualitative difference in the criterion is based on Kyriakides et al.'s (2009) suggestion that practice and questioning are components of direct teaching and are considered easier teaching behaviours, in contrast to orientation and modelling, which are associated with constructivist teaching and are known to be more demanding of teachers. The same reasoning was applied to the criterion for the quality dimension of all teaching behaviours. Thus, the quality of practice was determined by considering the complexity of the activity (in terms of product or process), and similarly, the quality of questioning was determined by question type (e.g., product or process). In terms of the quality of feedback associated with questioning, Table D8.2 shows the criterion adopted was an observation of a teacher commenting on an answer, positively for a correct answer and constructively for incorrect or partly correct answers. The quality of structuring activities was determined by clarity for students, and the quality of orientation activities was determined by the extent to which students were able to specify the aim of the task or lesson (see Table D8.2). For modelling, quality was determined by the extent to which the teacher was observed to guide or direct students to discover a problem-solving strategy, and whether it led students to solve problems successfully.

Table D8.2 shows the criterion for the quality of CE (I) behaviours was the extent to which these activities established on-task student behaviour (Creemers & Kyriakides, 2008). The criterion for the focus of CE (I) behaviours was that all teacher interactions, with the exception of social interactions and student interactions (i.e., student questions, spontaneous speech and collaboration), facilitated student learning.

The criterion used to determine the differentiation dimension of all teaching behaviours was an observation of activities associated with teaching behaviours at least once during the whole lesson (see Table D8.2 in Appendix D8).

Teacher developmental levels of teaching skill.

The checklist of criteria was applied to the teacher observation data. Figure 5.3 shows the percentage of teachers at each developmental level of teaching skill based on the results of this analysis displayed in Table D8.3 (see Appendix D8).


Several patterns are evident in Figure 5.3. First, most teachers are skilled at level one, the basic elements of direct teaching, and are able to use daily teaching routines effectively (Kyriakides et al., 2013). More specifically, most teachers at this level (93%) display quantitative characteristics associated with structuring and practice behaviours, which are typical of the direct teaching approach. It is apparent from Figure 5.3 that fewer teachers (60%) display the same level of skill in questioning behaviours.

Second, many teachers are proficient at level two, that is, putting aspects of quality into direct teaching and touching on active teaching (see Figure 5.3). In particular, most teachers (93%) demonstrate the qualitative characteristics of structuring and practice behaviours associated with direct teaching, and all teachers promote positive classroom environments through interaction with students, characteristic of active teaching. However, less than half of the teachers (47%) display the stage and quality dimensions of questioning. This result is consistent with the pattern reported for level one teaching skills.

Third, Figure 5.3 reveals most teachers demonstrate the qualitative aspects of active teaching and reaching out (Kyriakides et al., 2013). More specifically, Figure 5.3 shows teachers are proficient in the quality dimension of structuring (80%), and the stage and focus dimensions of creating effective learning environments by initiating and encouraging interactions (100%). Nonetheless, Figure 5.3 indicates that fewer teachers meet the criteria for the focus and quality dimensions of questioning (47%), and the stage and focus dimensions of classroom environments that encourage and support student-initiated interactions (60%). In addition, Figure 5.3 shows that all teachers include orientation and modelling behaviours, typical of new teaching approaches (Kyriakides et al., 2013), in their classroom teaching practices.


[Figure 5.3 is a bar chart showing the percentage of teachers (0–100%) at each of the five levels of teaching skill, with bars for structuring, practice, questioning, CE (TI), CE (SI), orientation and modelling.]

Notes. 1. CE (TI) refers to classroom environment (teacher-student interaction) which is teacher-initiated; CE (SI) refers to classroom environment (teacher-student interaction) which is student-initiated. 2. Within columns, the fill patterns distinguish the 'frequency', 'stage', 'focus' and 'quality' dimensions of the teaching behaviour. 3. The 'differentiation' dimension is excluded.

Figure 5.3. Teacher developmental levels of teaching skill.


Fourth, it can be seen from Figure 5.3 that these teachers do not differentiate structuring and practice behaviours. Even so, most integrate the stage dimensions of orientation (73%) and modelling (87%) behaviours. This means teachers not only provide sufficient tasks associated with these behaviours but also give the tasks at appropriate stages of the lesson (Kyriakides et al., 2009).

Last, Figure 5.3 shows that many teachers are proficient in the focus dimensions of orientation (67%) and modelling (87%) and the quality dimension of initiating teacher-student interactions to create effective learning environments (100%). Thus, teachers are able to use these teaching behaviours, which are associated with new teaching approaches, and to integrate them effectively into classroom teaching practices (Kyriakides et al., 2009). Additionally, Figure 5.3 indicates a smaller percentage of teachers are able to integrate the quality dimensions of orientation (53%), modelling (47%) and CE (SI) (40%); however, none are proficient in differentiating these teaching behaviours.

Student questionnaires.

Data collection procedures.

The student questionnaire (omitted due to copyright), adapted from Creemers & Kyriakides (2012), was translated into the native Bangla language using the back-translation technique (Douglas & Craig, 2007). The questionnaires were administered to students in the year nine mathematics class, immediately after each teacher observation.

The student questionnaire was designed to collect information about the teaching behaviours identified in the dynamic model (Creemers & Kyriakides, 2012). Students were asked to indicate on a five-point Likert scale (1 = 'never' to 5 = 'almost always') the extent to which their teacher behaved in certain ways during the lesson.

The researcher adopted administrative guidelines suggested by Creemers & Kyriakides (2012). This required several steps. The first step involved checking that each student had parental permission to participate and that s/he regularly attended the class. Applying these criteria, 63 students were excluded from completing the questionnaire, and the response rate was 87%. In the second step, the researcher asked the teacher to leave the room, briefed the students on the purpose of the questionnaire and the confidentiality provisions, and provided guidance on how to complete the questionnaire. The students were encouraged to read the statements and choose answers carefully to improve the quality of the data collection. The researcher remained in the classroom while students completed the questionnaire and answered any questions. Students took about 50 minutes to complete the questionnaire. A total of 439 students in 15 year nine mathematics classes from eight of the top-performing secondary schools in the Dhaka region completed student questionnaires.
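If the reported 87% response rate was computed as completed questionnaires over eligible students (completed plus excluded), the reported figures are internally consistent. A minimal sketch of that assumption:

```python
# Hedged arithmetic check: assumes response rate = completed / (completed + excluded).
completed = 439   # students who completed the questionnaire
excluded = 63     # students excluded by the permission/attendance criteria
response_rate = 100 * completed / (completed + excluded)
print(round(response_rate))  # 87
```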

Preliminary data analysis.

The completed student questionnaires were coded and checked for errors, and a data file was created in SPSS 22.0. Exploratory data analysis (Morgan et al., 2012) in SPSS 22.0 indicated the data met assumptions for confirmatory factor analysis, and missing data were eliminated by listwise deletion. The data file (with no missing data) was exported to Lisrel 9.1 for confirmatory factor analysis.
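Listwise deletion drops every respondent with at least one missing answer, so only complete cases reach the CFA. The following sketch, with hypothetical item names and data (not the study's SPSS workflow), illustrates the idea:

```python
# Illustrative listwise deletion with pandas; items and values are hypothetical.
import numpy as np
import pandas as pd

responses = pd.DataFrame({
    "item_1": [4, 5, np.nan, 3],
    "item_2": [5, np.nan, 4, 4],
    "item_3": [3, 4, 5, 5],
})
complete_cases = responses.dropna()  # drop any row with a missing value
print(len(complete_cases))  # 2 respondents remain
```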

Confirmatory factor analysis.

Based on evidence and Kyriakides' (2015) recommendations (Appendix A2), single-factor measurement models were specified (see Table 5.26) in which items from the student questionnaire were hypothesised to load onto latent teaching behaviour factors. It should be noted that a one-factor CFA model was not specified for orientation because only one item loaded on this factor. Figures D7.1 to D7.9 (Appendix D7) depict the complete specification of the hypothesised one-factor CFA models.

Table 5.26
Factor-item specification for hypothesised and final one-factor CFA models of teaching behaviours

Factor           Items (hypothesised)      Items (final)
Orientation      8                         -
Structuring      1,2,3,4,7,10,34           1,2,3,4,7,10,34
Practice         11,12,14,15               -
Modelling        44,45,46,47               44,45,46,47
Questioning      24,25,38,39,40,41,42,43   24,38,40,42,43
Time management  31,22,35,36               -
Assessment       5,6,9,48                  -
CE (TSI)         13,16,19,20,21,26,37      13,16,19,20,21,26,37
CE (SSI)         17,18,22,23               -
CE (MB)          27,28,29,30,33            27,28,29,30,33

Note. CE (TSI) refers to classroom environment (teacher-student interaction), CE (SSI) refers to classroom environment (student-student interaction), and CE (MB) refers to classroom environment (misbehaviour).

The models were estimated with diagonally weighted least squares (DWLS) using Lisrel 9.1. DWLS estimation is recommended when data are ordinal (Joreskog & Sorbom, 2006). Goodness of fit was determined using diagnostic information provided by Lisrel 9.1, including the χ² statistic, root mean square error of approximation (RMSEA), standardised root mean square residual (SRMR), comparative fit index (CFI), and parameter estimates. The model fit criteria recommended by Hu & Bentler (1999), which define acceptable model fit as RMSEA ≤ .07, SRMR ≤ .06 and CFI ≥ .95, were applied. Multiple fit indices were used because they provide different information about the model (absolute fit, parsimony of fit, fit to the null model) and collectively provide a more reliable evaluation of the solution (Brown, 2006).
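The decision rule implied by these cut-offs can be expressed as a small helper function. This is an illustrative sketch of the thresholds used in this study, not part of Lisrel:

```python
# Hedged sketch of the fit criteria applied in this study:
# RMSEA <= .07, SRMR <= .06, CFI >= .95 (as cited from Hu & Bentler, 1999).
def acceptable_fit(rmsea, srmr, cfi):
    """Return True when all three fit indices meet the study's cut-offs."""
    return rmsea <= 0.07 and srmr <= 0.06 and cfi >= 0.95

# Structuring model reported below meets all three criteria:
print(acceptable_fit(rmsea=0.00, srmr=0.03, cfi=1.00))  # True
# The hypothesised questioning model (RMSEA = .08, SRMR = .07) does not:
print(acceptable_fit(rmsea=0.08, srmr=0.07, cfi=0.96))  # False
```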

The results displayed in Table 5.27 show the fit indices for the hypothesised one-factor CFA models of structuring [χ²(14, N = 432) = 24.18, p > .05, CFI = 1.00, RMSEA = .00, SRMR = .03], classroom environment (teacher-student interaction) [χ²(14, N = 426) = 58.30, p < .001, CFI = .98, RMSEA = .03, SRMR = .04], modelling [χ²(2, N = 439) = 3.92, p > .05, CFI = 1.00, RMSEA = .00, SRMR = .02], and classroom environment (misbehaviour) [χ²(5, N = 432) = 11.92, p > .05, CFI = .99, RMSEA = .03, SRMR = .03] fitted the data well.

Table 5.27
Fit indices for one-factor CFA models of teaching behaviours

Factor       Model         Item  df   χ²          CFI   RMSEA  SRMR  χ²diff
Structuring  Hypothesised  -     14   24.18       1.00  .00    .03   -
CE (I)       Hypothesised  -     14   58.30***    .98   .03    .04   -
Modelling    Hypothesised  -     2    3.92        1.00  .00    .02   -
Questioning  Hypothesised  -     20   192.96***   .96   .08    .07   -
             1             41    14   174.17***   .97   .08    .07   18.79**
             2             25    9    56.15***    .99   .05    .04   18.02**
             3             39    5    10.26***    1.00  .00    .02   45.89**
CE (MB)      Hypothesised  -     5    11.92       .99   .03    .03   -
Time         Hypothesised  -     2    8.26*       .94   .07    .04   -
Practice     Hypothesised  -     2    19.02***    .98   .08    .05   -
CE (SSI)     Hypothesised  -     2    25.29***    .89   .12    .07   -
Assessment   Hypothesised  -     2    .29         1.00  .00    .01   -

Notes. CE (I) refers to classroom environment (teacher-student interaction), CE (SSI) refers to classroom environment (student-student interaction), CE (MB) refers to classroom environment (misbehaviour); Item = item removed at that re-specification step; df = degrees of freedom; χ² = chi-square statistic; RMSEA = root mean square error of approximation; SRMR = standardised root mean square residual. *p < .05, **p < .01, ***p < .001.

In contrast, the fit indices for questioning [χ²(20, N = 439) = 192.96, p < .001, CFI = .96, RMSEA = .08, SRMR = .07], time management [χ²(2, N = 431) = 8.26, p < .05, CFI = .94, RMSEA = .07, SRMR = .04], and practice [χ²(2, N = 437) = 19.02, p < .001, CFI = .98, RMSEA = .08, SRMR = .05] indicated these models did not fit the data well.

The one-factor CFA models of questioning, time management and practice were re-specified, based on diagnostic information provided in Lisrel 9.1, to improve model fit, parsimony, and interpretability of the solutions (Brown, 2006). As guidance with respect to re-specification strategies is limited in the literature (Schumacker & Lomax, 2004), items were deleted rather than set free for the sake of parsimony (Holmes-Smith, 2001). Thus, the steps in model re-specification were to remove one item at a time from the one-factor CFA model, re-estimate the model in Lisrel 9.1, and examine model fit indices and parameter estimates. Table 5.27 shows the fit indices for the re-specified questioning model after item 41 was removed [χ²(14, N = 429) = 174.17, p < .001, CFI = .97, RMSEA = .08, SRMR = .07]. The χ² difference (18.79, p < .01) suggested the alternative model was a significantly better fit, but the model fit indices (Table 5.27) still suggested poor fit, so the model was re-specified again with the exclusion of item 25. The fit indices [χ²(9, N = 437) = 56.15, p < .001, CFI = .99, RMSEA = .05, SRMR = .04] and the χ² difference (18.02, p < .01) suggested this alternative model was a significantly better fit for the data.
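The χ² difference test for nested models compares the drop in χ² against the drop in degrees of freedom. A sketch using the questioning figures reported above (scipy's chi-squared survival function gives the upper-tail p-value):

```python
# Chi-square difference test for nested CFA models, using the reported
# questioning statistics: dropping item 41 reduced chi-square from 192.96
# (df = 20) to 174.17 (df = 14).
from scipy.stats import chi2

d_chisq = 192.96 - 174.17   # 18.79
d_df = 20 - 14              # 6
p_value = chi2.sf(d_chisq, d_df)  # upper-tail p-value
print(p_value < 0.01)  # True: consistent with the reported p < .01
```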

The one-factor CFA model of time management was re-specified by removing item 31, and similarly, the one-factor CFA model of practice was re-specified by removing item 12. However, re-specification of these models generated negative error variances and improper solutions. Thus, the one-factor CFA models of time management and practice were excluded from further analysis.

The hypothesised one-factor CFA models of assessment and classroom environment (student-student interaction) were estimated with DWLS in Lisrel 9.1, but generated improper solutions, and were also excluded from further analysis. The final fit indices for the one-factor CFA models of structuring, questioning, modelling, CE (I) and CE (MB), shown in Table 5.28, suggested these models fitted the data very well (Hu & Bentler, 1999).

Table 5.28
Final fit indices for one-factor CFA models of teaching behaviours

Model        df  χ²        CFI   RMSEA  SRMR
Structuring  14  24.18     1.00  .00    .03
Questioning  9   56.15***  .99   .05    .04
Modelling    2   3.92      1.00  .00    .02
CE (I)       14  58.30***  .98   .03    .04
CE (MB)      5   11.92     .99   .03    .03

Notes. CE (I) refers to classroom environment (teacher-student interaction), CE (MB) refers to classroom environment (misbehaviour), df = degrees of freedom, χ² = chi-square statistic, RMSEA = root mean square error of approximation, SRMR = standardised root mean square residual. ***p < .001.

The standardised parameter estimates for the hypothesised and final one-factor CFA models are depicted in Figures D7.1 to D7.10 (see Appendix D7). A key assumption of the dynamic model (Creemers & Kyriakides, 2008) is that the teaching behaviours are interrelated (Kyriakides et al., 2009). Thus, the next step was to determine whether the teaching behaviours were related. A hypothesised five-factor CFA model of teaching behaviours was specified and estimated with DWLS in Lisrel 9.1. Table 5.29 shows the fit statistics for the hypothesised five-factor model [χ²(340, N = 414) = 1726.34, p < .001, CFI = .98, RMSEA = .17, SRMR = .07]. The fit indices and the standardised parameter estimates for this model, shown in Figure D7.11 (Appendix D7), suggested this model did not fit the data well.

Table 5.29
Fit indices for five-factor CFA model of teaching behaviours

Model         Item  df   χ²          χ²diff    CFI  RMSEA  SRMR
Hypothesised  -     340  1726.34***  -         .98  .17    .07
1             28    314  1550.50***  175.84**  .98  .14    .07
2             29    289  1441.77***  108.73**  .98  .11    .07
3             27    265  1233.48***  208.29**  .98  .10    .06
4             19    241  1103.86***  129.62**  .99  .09    .06
5             3     220  1033.99***  69.87**   .99  .08    .06
6             2     199  939.64***   94.35**   .99  .07    .06

Note. Item = item removed at that re-specification step. **p < .01, ***p < .001.


The five-factor model was re-specified using the same strategies adopted for the ill-fitting one-factor CFA models. The process of re-specification was iterative, with one item removed at a time, followed by model re-estimation and examination of the model fit statistics and parameter estimates. The fit indices for the re-specified models are shown in Table 5.29. In all six re-specifications, items were removed one at a time, following examination of parameter estimates (i.e., r² estimates). The item with the lowest r² was removed first; thus, the order of item deletion was item 28, 29, 27, 19, 3, and 2. The χ² difference tests shown in Table 5.29 suggested significant fit improvement was obtained through model re-specification. In all, the five-factor teaching behaviour model was re-specified six times, and the sixth iteration generated a model with an acceptable solution [χ²(199, N = 414) = 939.64, p < .001, CFI = .99, RMSEA = .07, SRMR = .06]. The standardised parameter estimates for the five-factor CFA model of teaching behaviours are displayed in Figure D7.11 and the final item-factor specifications are shown in Table D7.2 (see Appendix D7).
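The iterative strategy just described (delete the item with the lowest r², re-estimate, and keep the smaller model only while the χ² difference test shows a significant improvement) can be sketched as follows; `estimate_cfa` is a hypothetical stand-in for the Lisrel estimation step, not a real library call:

```python
# Sketch of the iterative re-specification loop; `estimate_cfa` is assumed
# to return {"chisq": ..., "df": ..., "r2": {item: r_squared}} for a model.
from scipy.stats import chi2

def respecify(items, estimate_cfa, alpha=0.01):
    """Drop the lowest-r-squared item while the chi-square difference
    test shows significant improvement in fit."""
    fit = estimate_cfa(items)
    while len(items) > 3:  # keep enough indicators to identify the factor
        worst = min(fit["r2"], key=fit["r2"].get)   # lowest r-squared item
        candidate = [i for i in items if i != worst]
        new_fit = estimate_cfa(candidate)
        d_chisq = fit["chisq"] - new_fit["chisq"]
        d_df = fit["df"] - new_fit["df"]
        if chi2.sf(d_chisq, d_df) >= alpha:         # no significant gain
            break
        items, fit = candidate, new_fit
    return items, fit
```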

The most important findings to emerge from the CFA analysis are, first, that the structuring, modelling, CE (I), questioning, and CE (MB) teaching factors are important effective practices of mathematics teachers in Dhaka, Bangladesh. Second, four teaching factors (time management, practice, assessment and classroom environment [student-student interaction]) specified in the dynamic model of educational effectiveness (DMEE) (Creemers & Kyriakides, 2008) were not identified. A possible explanation for these findings is the cultural context. Most studies that have validated the teaching factors in the DMEE (e.g., Kyriakides & Creemers, 2008, 2009; Panayiotou et al., 2014) have been conducted in European countries; it is likely that student interpretation of items, and hence responses, will be different in Bangladesh, culminating in different findings with respect to teaching behaviours.


Descriptive statistics.

Following the recommendations of several scholars (Holmes-Smith, 2001; Venkatraman, 1989), descriptive statistics, correlations and Cronbach alphas of the latent teaching factors in the five-factor model were computed in SPSS 22.0. Several points can be made about the results shown in Table 5.30. First, there are strong positive correlations between some of the teaching behaviours. The results show teachers who demonstrate structuring are also likely to display modelling, questioning and CE (I), providing support for the integrated teaching approach in the dynamic model (Creemers & Kyriakides, 2008).

Table 5.30
Descriptive statistics, correlations and Cronbach alphas of latent teaching factors

Teaching behaviour  Mean  SD    1     2     3     4     5
1. Structuring      4.32  .95   .60
2. Questioning      4.37  .94   .73   .67
3. Modelling        4.13  1.03  .83   .79   .71
4. CE (I)           4.48  .84   .89   .88   .92   .81
5. CE (MB)          2.17  1.17  -.62  -.63  -.85  -.71  .49

Notes. (1) CE (I) refers to classroom environment (teacher-student interaction), CE (MB) refers to classroom environment (misbehaviour); (2) SD = standard deviation; correlations in bold are significant, p < .01 (two-tailed); Cronbach alphas are on the diagonal.
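For reference, Cronbach's alpha can be computed from first principles. This sketch uses hypothetical data, not the study's questionnaire responses (the reported values came from SPSS):

```python
# Cronbach's alpha from first principles; data below are hypothetical.
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) matrix of Likert responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Perfectly correlated items yield alpha of exactly 1.0:
identical = [[1, 1], [2, 2], [3, 3], [4, 4]]
print(round(cronbach_alpha(identical), 2))
```

Alpha rises with the number of items and their intercorrelation, which is why a two-item scale can yield a low estimate even when the items are sound.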

The item-factor specifications in Table D7.1 (see Appendix D7) indicate structuring, modelling, CE (I) and questioning are comprised of items that reflect positive aspects of these teaching factors which, from the student perspective, contribute to learning. For example, item 34, 'we spend time at the end of the lesson to go over what we have just learned' (structuring); item 13, 'the teacher immediately comes to help me when I have problems doing an activity' (CE [I]); item 24, 'when a pupil gives a wrong answer the teacher helps her/him to understand her/his mistake and find the correct answer' (questioning); and item 44, 'when we have problem-solving exercises and tasks in Mathematics lessons, our teacher helps us by showing us easy ways or tricks to solve the exercises or tasks' (modelling).

Second, Table 5.30 shows strong negative correlations between CE (MB) and structuring (r = -.62), questioning (r = -.63), modelling (r = -.85) and CE (I) (r = -.71), which indicate teachers who engage in structuring, questioning, modelling and CE (I) are unlikely to experience classroom misbehaviour. Examination of the item-factor specifications in Table D7.1 (Appendix D7) shows CE (MB) is comprised of two items ('when a pupil gives a wrong answer in Mathematics class some of the other children in the class make fun of her/him' [item 30] and 'when the teacher talks to a pupil after they have been naughty, sometimes after a while, that pupil will be naughty again' [item 33]). In contrast to the structuring, modelling, CE (I) and questioning items, these items are concerned with student behaviour and likely reflect the extent to which teachers deal successfully with misbehaviour.

Third, Table 5.30 shows moderate Cronbach alpha estimates of internal consistency for the structuring, questioning, modelling and CE (I) constructs. Although the Cronbach alpha for CE (MB) (α = .49) is low, the factor was retained in the five-factor CFA model of teaching behaviours because it was part of the original instrument, Cronbach alpha is likely to be depressed when there are only a few indicators (Raykov, 1998), and the scale was considered to be theoretically sound.

Teacher interviews.

Data collection procedures.

The purpose of the teacher interviews was to ask teachers about their orientation, structuring, modelling, practice and questioning behaviours. Generally, teacher responses were based on the lesson observation. The interviews were conducted on the same day as the observation, on-site in a quiet location, and were digitally recorded with permission from the teacher participants. Each interview took about 25 minutes and was conducted in Bangla.

Data analysis procedures.

The three steps of qualitative data analysis (data condensation, data display, and drawing and verifying conclusions) recommended by Miles et al. (2014) were used to analyse the interview data. Initially, data condensation involved the transcription of the interview data into text by the researcher. Next, the researcher and a peer separately coded three interview transcripts using first-cycle coding based on key variables of interest (Miles et al., 2014), and then cross-checked the codes for inter-rater reliability.

There was 87% agreement in first-cycle coding, and with a coding structure in place, the remaining interview transcripts were coded. Cross-case data displays were developed to assist second-cycle coding and permitted systematic cross-case comparison (Miles et al., 2014). The data displays in Appendix D9 comprised the names and numbers of teachers, codes, categories, and themes which emerged from the interview data. The data displays were used to draw and verify conclusions; this involved initially conducting a quick scan of the data displays to see if anything stood out, and then using other tactics including noting patterns, identifying themes, comparing and contrasting, and clustering and counting data (Miles et al., 2014). Additionally, conclusions were confirmed by following up surprises, triangulating, making if-then tests and checking out rival explanations (Miles et al., 2014).
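Percent agreement of the kind reported here is simply the share of coding decisions on which the two coders matched. A sketch with hypothetical codes (not the study's actual coding scheme):

```python
# Illustrative inter-rater percent agreement; the codes are hypothetical.
def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

coder_a = ["goal", "goal", "review", "model", "practice", "goal", "model", "review"]
coder_b = ["goal", "goal", "review", "model", "practice", "goal", "review", "review"]
print(percent_agreement(coder_a, coder_b))  # 87.5
```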

Emerging themes from teacher interviews.

The results, summarised in Table D9.1, D9.2, D9.3, D9.4, and D9.5 (see

Appendix D9), show the frequency, focus, and quality dimensions of orientation, structuring, modelling, practice and questioning behaviours emerged as key themes from analysis of teacher interviews.


Orientation.

Table D9.1 (in Appendix D9) shows all teachers (15) articulated performance-related goals and some of these teachers (3) reported mastery goals for students. For example, Momota (Nakib HS) said:

“The goal of my lesson was for students to learn how to measure the area of the rectangular square. Using the knowledge of measuring area, they will be able to measure the area of such fields and apply this knowledge in their real life”.

When asked if the students understood the goal of the lesson, all 15 teachers stated they did, and over half of these teachers identified orientation strategies (e.g., use of student ideas, providing an overview, use of analogies, examples, or visuals) to aid student understanding of lesson goals. This is illustrated in the comment made by Nila (Tasin HS):

“At the beginning of the class, I asked students about their exam scores in mathematics tests, then asked them to ‘sum’ those numbers followed by a briefing on the necessity of knowing how to calculate the average, median, and mode in practical life.

By such activities, I helped them understand that I would be teaching something about numbers and when I asked them about today’s lesson topic they answered that the topic is measuring mean, median, and mode”.

In addition, some teachers reported using practice activities (4) and questioning (9) to check that students had understood the goals of the lesson. For example, Adnan (Tasin HS) stated:

“When I have evaluated them by giving an exercise of Ratio and Proportion, they were able to solve the problem successfully. Thus, I realized that they had understood my lesson goal”.


However, three teachers did not report checking or assumed that students had understood the goals of the lesson. This is indicated in the comment made by Angel (Nakib HS):

“At the beginning of the class, I reviewed the methods/formulas of measuring height and distance (which the students knew before), and this review made the students realise that today they are going to apply the formulas to measure the height and distance”.

Structuring.

The results in Table D9.2 (in Appendix D9) show that all 15 teachers reported structuring activities. Five teachers explained that structuring activities, such as breaking the whole unit into lessons with content progression from easy to difficult and gathering relevant learning material, occurred at the pre-lesson stage (see Table D9.2), while most teachers (14) reported structuring activities within the lesson, for example, beginning the lesson with orientation strategies such as checking and linking lesson content to prior learning (2 teachers) and providing an overview of lesson goals (11 teachers).

Most teachers reported presenting content (14 teachers) and modelling problem-solving strategies (14 teachers), followed by application or practice activities (6 teachers), and lesson review (4 teachers). This type of structure is consistent with a direct approach and was reflected in the description provided by Hazzaz (Hamida HS):

“I pre-arranged the content and the learning material that was available. Other structuring activities included starting the lesson with yesterday’s lesson content and I linked it with the easier part of today’s topic like the mode for example, I asked them to find out what the mode was from the bunch of five pens of different colours and moved towards presenting the content from easy to the difficult part of the content gradually.

After the presentation, I gave them exercises to assess their understanding and finally, I finished the lesson by summarising the main points of the whole lesson”.

Table D9.2 indicates all teachers (15) stated students understood the lesson structure and, when asked how they knew this, some reported they used practice activities (6) and questioning (4) to determine student understanding. For example, Antu (Hamida HS) said:

“When I gave them an exercise, most of the students had solved the problem following the steps that I showed them in solving problems. From this, I understood that the students had identified the lesson structure”.

Somewhat concerningly, Table D9.2 shows that five teachers did not mention how they checked for student understanding, and two teachers assumed understanding.

One teacher, Babu (Mahbub HS) suggested the reason he assumed that the students understood was:

“I always come to class well-prepared and well-planned. As I have been teaching them a long time, the students are well-aware of my structure. They know my , how I move across the content”.

Modelling.

The results summarised in Table D9.3 (see Appendix D9) suggest all teachers engaged in modelling behaviours. All 15 teachers reported the focus of their modelling behaviours was the introduction of new learning to students through demonstration and application activities. Table D9.3 (in Appendix D9) indicates presentation was the main strategy adopted by teachers to demonstrate new learning. However, less than half of the teachers described specific strategies to gain student participation, such as linking new learning to prior learning (4), using examples (3) and questions (3), and only two teachers described seat work or group work associated with application activities. As Kazol (Aruna HS) suggested:


“I have introduced new learning by mixing of the activities of presenting and analysis about sequence and series and show the process to solve different problems on the board, and inviting students to participate with me in the process mentioned before, for example, write a series, sum it and make an equation, find out the ‘N’ the term.

Above all, coordination of my lecture and students’ participation through question and answer, I have introduced the new learning”.

Table D9.3 shows all 15 teachers thought students had learned something new in their lesson. When asked how they knew this was the case, most teachers (13) suggested it was because the students were able to successfully apply the problem-solving strategy they had modelled. This is illustrated in the comment made by Adnan (Tasin HS):

“Yes, I think that most of the students had learned new things today. Though students do have some preliminary ideas about ‘ratio’ from previous years, however, the idea of ‘proportion’ is totally new to the students. Also, solving mathematical problems related ratio and proportion are new to the students. I understood that students learned the provided new content from the problem solved during the group work, and also from student answers to the questions I asked. During the group work, most of the students solved the problem correctly, and in individual questioning most of the students responded right”.

Practice.

Table D9.4 (in Appendix D9) indicates all teachers (15) mentioned practice activities. When asked why practice activities were provided, most teachers (13) suggested it was to check student understanding, and some teachers (4) added that practising to mastery, to facilitate deep learning, was an important consideration. As Bindu (Kanta HS) said:

“I give them individual exercise to check whether the students have learned the learning and if they can solve the exercise correctly they will be able to practice similar types of exercises at their home. Such practicing will help them to master the topic and to solve the difficult ‘index’ problem”.

Table D9.4 shows that five teachers also mentioned the reason for using practice activities was to engage students in more active learning with their peers. For example, Babu (Mahbub HS) stated:

“I have given them the group exercise. Usually, in my classes, I use group work and ask the students to solve the problem in a group. I prefer group work as it gives the weaker students an opportunity to learn from better students more comfortably than from the teacher. Today I also did the same, I gave them an exercise to solve in groups to know whether students had any difficulties in learning today’s topic and to evaluate the level of student understanding in today’s content”.

Additionally, one insightful teacher noted the importance of practice activities for assessing “…is my teaching strategy appropriate for the students or I need to change the strategy?” (Kazol, Aruna HS). Kazol’s comment suggested he recognised the direct relationship between the quality of teaching strategies and student learning in his classes.

In terms of the types of practice activities, Table D9.4 shows most activities involved individual seatwork (8) or group work (8), primarily focused on tasks (12) similar to the ones modelled by teachers, with few teachers (3) reporting setting more complex tasks for practice activities or homework. This is illustrated in a comment made by Bindu (Kanta HS):

“At the end of the class, I gave them a more difficult exercise to practice in the class. Also to practice at home, I had also given some exercises as homework”.


Questioning.

Table D9.5 (see Appendix D9) shows most teachers (14) engaged in questioning which was both product and process related. Most teachers reported questioning activities were important to ascertain student understanding (14). They used product questions to evaluate, motivate and enhance student understanding (14) and used process questions to enhance deep learning (14). The comments made by Shilpo (Aruna HS) are typical of teacher responses and suggest teachers recognised the need to differentiate the type of questions with reference to student level of understanding:

“In my class, there is a mixture of three types of students: good, average, and weak. Therefore, I start with questions that can be answered by all students, and focus on the weaker students to provide the answer. From the easy questions, I move gradually to the harder questions to increase the skills of the stronger students”.

Table D9.5 shows that while most teachers (12) thought that all students understood their questions, some qualified this by suggesting it was dependent on student level. For example, Momota (Nakib HS) said, “Not in all contexts, all questions are clear and understandable to all students. It varies according to students’ ability”.

In order to assist student understanding of questions, Table D9.5 indicates teachers used word repetition (4), asked only content-related questions (8) or clarified the question (2) for students; two teachers did not mention their strategies. This was suggested by Hazzaz (Hamida HS):

“My questions seem clear and understandable to my students as I usually try to make my voice clear and nice to present my content understandable and as well as the questions. Also, I ask questions relevant with the content to evaluate students’ understanding”.

This comment seems to indicate teachers see value in enabling student understanding of questions and apply several strategies (i.e., repetition, question clarity, content-related questions) to facilitate that understanding.

Table D9.5 shows that most teachers assumed students understood their questions.

However, Table D9.5 indicates some teachers (4) would create more discussion if a student gave a partial answer to a question, to elicit the correct answer. For example, Moni (Tasin HS) stated: “With their partial or half answer I discuss and add information to come to the full answer which makes the concept more clear and understandable”.

Further, in the event of students not answering, these teachers would provide the answer or probe to elicit the correct answer. This was suggested by Priya (Aruna HS) in her response:

“While questioning, I move from the easy topic (what they knew earlier) to more in-depth questioning of the present content. I help them by giving hints to find the answer and solve the problem by themselves. The goal of doing such questioning is to explore all students’ deep learning”.

Discussion of results and research questions.

This section discusses the results from phase one and two of the study in the context of the relevant research question.

Research question one.

Research question one asked, ‘what are the 20 highest performing secondary schools in mathematics within Dhaka Metropolitan City (DMC), Bangladesh?’

The results from phase one of the study showed the 20 highest performing secondary schools in mathematics had value-added scores ranging from 1.07 to 1.91 (M = 1.27, SD = .23). The mean value-added score of the 20 highest performing schools was 1.03 above the mean value-added score computed for the population of 380 secondary schools (M = .24, SD = .56) within DMC, Bangladesh. Further, the 20 highest performing secondary schools shared one notable characteristic: they were all private schools. Apart from this, there were no other clear patterns among the 20 highest performing schools. For example, the size of the student population ranged from 80 to 1268 students; most schools (17) employed up to five mathematics teachers (which did seem to be determined by the size of the student population); more than half (13) catered for students in grades 1-10, one-fifth (4) catered for students in grades 1-12, and three schools catered for students in grades 6-10.
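The selection step described above (rank schools by value-added score and keep the highest-scoring group) can be sketched as follows. The school names, scores, and the cut-off of two are invented for illustration; the study's actual value-added computation from the Bangladesh secondary data is not reproduced here.

```python
from statistics import mean, stdev

# Hypothetical value-added scores for a population of schools
# (identifiers and values are invented for illustration).
value_added = {
    "School A": 1.91,
    "School B": 0.24,
    "School C": 1.07,
    "School D": -0.30,
    "School E": 0.85,
}

TOP_N = 2  # the study selected the top 20 of 380 schools

# Rank schools by value-added score, highest first, and keep the top N.
ranked = sorted(value_added.items(), key=lambda kv: kv[1], reverse=True)
top_schools = ranked[:TOP_N]

# Summarise the selected group, as the thesis does with M and SD.
top_scores = [score for _, score in top_schools]
print(top_schools)
print(round(mean(top_scores), 2), round(stdev(top_scores), 2))
```

The same two-line ranking generalises directly to 380 schools and a cut-off of 20; the mean and standard deviation of the selected group correspond to the M and SD values reported for the top 20 schools.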

Research question two.

Research question two asked, ‘what teaching behaviours are demonstrated by mathematics teachers in the 20 highest performing secondary schools in mathematics within DMC, Bangladesh?’

It may be stated, based on synthesised results from teacher observations, student questionnaires and teacher interviews, that teachers displayed quantitative characteristics of orientation, structuring, modelling, practice, questioning, and classroom environment behaviours, but did not display time management and assessment behaviours.

However, several aspects of this finding require further discussion.

First, no differences were found between teachers in terms of the number and duration of tasks related to teaching behaviours. Thus, there were similarities between teachers with respect to quantitative characteristics of teaching behaviours.

Second, whilst teachers displayed similarities in the number of orientation, structuring, modelling, practice, questioning, and classroom environment tasks, the percentage of teaching time devoted to each behaviour differed. For example, Figure 5.2 showed modelling accounted for 46%, practice for 21%, classroom environment (teacher-student interaction) for 12%, and orientation for 11% of available teaching time, while the other behaviours together accounted for 10% of available teaching time.

Modelling is related to promoting higher order thought processes and problem-solving strategies (Panayiotou et al., 2014). Effective teachers use modelling strategies to help students internalise, understand and apply content learned to different types of problems (Ayres et al., 2004; Borich, 2015). Modelling has been reported (Borich, 2015) to take a considerable portion of teaching time (up to 40%) and involve seatwork and one-to-one dialogue in which teacher responses target student achievement, interest, and ability. It is apparent teacher modelling behaviours were consistent with previous research.

Practice is the opportunity for students to apply new knowledge through problem-solving (Vanlaar et al., 2014). When practice is goal directed, challenging and sufficiently frequent, student learning and achievement is enhanced (Ericsson & Charness, 1994; Ericsson & Lehmann, 1996). Although the teachers displayed practice behaviours, it is suggested the frequency of practice may be insufficient to aid student learning and achievement. Further, the duration of practice tasks suggests teachers provided massed practice opportunities (Schmidt, 1991) rather than the distributed practice opportunities (Bloom & Shuell, 1981) associated with higher levels of student learning and achievement.

Teacher-student interactions comprised 12% of teaching time, suggesting a low level of interaction with students. Teacher-student interactions contribute to classroom climate, and refer to the consistent flow of information between teachers and students related to perceptions, attitudes and feelings about each other and current learning tasks (Burns, 1982; Rogers, 1982). Researchers (Davis, 2003; Wentzel, 2009) have reported teacher-student relationships contribute to student motivation, learning, performance and school completion. Although teachers appeared aware of the importance of their role in creating a positive learning environment, the low levels of teacher-student interaction suggest this teaching behaviour was under-utilised.

Teachers demonstrated orientation tasks for about 11% of teaching time. Orientation involves providing goals related to lesson tasks, a lesson, or a series of lessons, and challenging students to identify reasons why a task is important. Orientation is expected to increase levels of student participation and engagement as it makes tasks more meaningful for students (De Corte, 2000; Paris & Paris, 2001).

Teacher structuring behaviours enhance student learning and achievement by beginning with goal reviews and outlines, signalling transitions, focusing attention and reviewing key ideas at the end of the lesson (Rosenshine & Stevens, 1986). It was apparent in this study that structuring tasks took a minimal amount of teaching time. During the teacher interviews, several teachers suggested structuring was completed in the planning phase before the lesson was taught (see Table D9.2 in Appendix D9), which may explain why this behaviour was not observed to account for a larger percentage of teaching time.

Even though every teacher demonstrated questioning behaviours, questioning accounted for a minimal amount of teaching time (3%) (see Figure 5.3). This result contrasts rather sharply with a study by Cotton (1989), who reported questioning was the second most dominant teaching method (after teacher talk), and also with the 35-40% of teaching time reported by Long and Sato (1983) and van Lier (1998). Given the number of tasks associated with questioning (e.g., asking a question, wait time, the answer and feedback), it seems the teachers may not use questioning in a way that contributes to student learning and achievement. Research (Brophy & Good, 1986; Evertson et al., 1980) has shown effective teachers ask lots of questions, are aware of the effects of asking different types of questions (e.g., product questions enhance surface learning and process questions enhance deeper learning) (Ayres et al., 2004), and provide feedback designed to elicit a correct response (Muijs et al., 2014).

Teachers were rarely observed encouraging cooperation and group work or dealing with student misbehaviour (see Figure 5.1 and Figure 5.2). Whilst the low frequency of dealing with student misbehaviour may be attributed to cultural norms, the low frequency and duration of teaching behaviours to encourage cooperation and group work suggests an emphasis on individualistic learning. Individualistic learning is not as effective as cooperative learning methods (Hattie, 2009). Johnson, Maruyama, Johnson, Nelson and Skon (1981) reported cooperation with intergroup competition was superior to interpersonal competition and individual effort in promoting achievement across all subject areas, and effects increased as a student moved from primary to high school.

Similar findings were reported by Hall (1988).

Time management is related to opportunity to learn and time on-task, and is therefore a significant predictor of student learning and achievement (Emmer & Evertson, 1981). Whilst time management behaviour was not directly observed in this study, it may be implied, given evidence of other teaching behaviours (e.g., orientation, modelling, practice, questioning and teacher-student interaction) which facilitated opportunities to learn and on-task student behaviour.

Assessment enables teachers to identify student needs and evaluate teaching practices (Creemers & Kyriakides, 2012). However, in this study, teachers were not observed in tasks related to assessment. This may have been due to the timing of teacher observations, which were conducted when new topics were introduced to students. Further, it is possible more assessment tasks would occur later in the teaching of a topic, particularly if assessment was undertaken for summative rather than formative purposes.


Third, the pattern in teaching behaviours suggests teachers build student understanding of subject matter most often through modelling, practice, orientation and teacher-student interaction, less often with structuring and questioning tasks, and rarely through encouraging competition, cooperation, and group work and dealing with student misbehaviour. It is contended this pattern in teaching behaviours is indicative of a direct teaching approach. The direct teaching approach, also known as direct instruction, is often incorrectly associated with transmission or didactic teaching (Hattie, 2009). The results from Hattie’s (2009) meta-analysis showed direct instruction has an effect size of .59. Adams and Engelman (1996) suggested direct instruction comprised seven steps: planning, orientation, modelling, guided practice, closure, independent practice and evaluation. These steps are largely consistent with the three-phase lesson structure adopted by teachers in this study that incorporated orientation, modelling and opportunities to practice.

Hattie (2009) posited direct instruction demonstrates the importance of stating learning intentions and success criteria at the start, then moving towards these by inviting students to learn through deliberate practice, modelling, feedback and multiple opportunities to practice. Others, (e.g., Ayres et al., 2004) would claim this approach is effective because it is consistent with how students learn.

Last, there are notable differences between the observed teaching behaviours and student rated teaching behaviours (see Table 5.31 for comparison). It is likely the student rated teaching behaviours reflect student perceptions of the teaching behaviours that assist in building their understanding of subject matter, and observed teaching behaviours reflect teacher views about effective teaching behaviours.


Table 5.31

Observed and student rated teaching behaviours

Teaching behaviours    Observed teaching behaviours    Student rated teaching behaviours
Orientation            ✓
Structuring            ✓                               ✓
Modelling              ✓                               ✓
Questioning            ✓                               ✓
Practice               ✓
CE(I)                  ✓                               ✓
CE(MB)                 ✓                               ✓
Time management
Assessment

Note. CE (I) refers to classroom environment (teacher-student interaction) and CE (MB) refers to classroom environment (misbehaviour).

Whilst Table 5.31 shows overlap between observed and student rated teaching behaviours (i.e., modelling, structuring, questioning, teacher-student interaction and dealing with misbehaviour), the underlying common denominator in the pattern of student rated teaching behaviours is that most of these teaching behaviours involve some form of teacher-student interaction. Research (Creemers & Reezigt, 1999; Freiberg & Stein, 1999) has shown the way the teacher interacts with students and classroom management practices contribute to classroom climate. The students viewed teacher-student interactions as the key contributor to their learning and achievement. It is apparent the student view is consistent with research (e.g., Cornelius-White, 2008) that has shown negative student motivation and school adjustment when teacher-student relationships are poor. Moreover, Cornelius-White (2008) noted most students who dislike school, dislike it mainly because they dislike their teacher.

In sum, teachers demonstrated the quantitative characteristics of teaching behaviours, with the exception of time management and assessment. The patterns in teaching behaviour indicated a direct teaching approach was adopted by the teachers to build student understanding of subject matter. Interestingly, student rated teaching behaviours all involved a level of interaction with teachers, but this interaction seemed to be under-utilised by teachers. Nonetheless, it is recognised these results provide only part of the picture because effective teaching behaviours are multidimensional, and consideration of qualitative characteristics of teaching behaviours is necessary to determine the effective functioning of teaching behaviours (Creemers & Kyriakides, 2015a).

Research question three.

Research question three asked, ‘are teacher characteristics (i.e., experience, qualifications, self-efficacy beliefs) related to teaching behaviours of mathematics teachers in the 20 highest performing secondary schools in mathematics within DMC, Bangladesh?’

The teachers in this study were very experienced (e.g., 9-27 years of teaching experience), highly qualified (e.g., 60% had a Master’s degree), and expressed high levels of self-efficacy in relation to teaching behaviours. Based on results from Fisher’s exact test, it can be stated that teacher characteristics (i.e., teaching experience, qualifications, self-efficacy beliefs) are not related to the teaching behaviours of mathematics teachers in the 20 highest performing secondary schools within DMC, Bangladesh.
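For readers unfamiliar with the test, an association check of this kind can be sketched with a 2×2 contingency table. The counts below are invented for illustration and do not reproduce the study's data or its actual cross-tabulations of teacher characteristics against behaviours.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table: sum the
    hypergeometric probabilities of all tables (with the same
    margins) that are no more likely than the observed table."""
    (a, b), (c, d) = table
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def prob(x):
        # probability of a table whose top-left cell is x, margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    lo = max(0, row1 - (n - col1))   # smallest feasible top-left cell
    hi = min(row1, col1)             # largest feasible top-left cell
    p_obs = prob(a)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical counts: rows = qualification (Master's / Bachelor's),
# columns = behaviour displayed / not displayed.
p = fisher_exact_two_sided([[7, 2], [5, 1]])
print(round(p, 3))
```

A large p-value, as produced by this invented table, would be consistent with the reported finding of no association; with only 15 teachers, the study's test would in any case have had limited power to detect small effects.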

With respect to teaching experience, previous investigations (e.g., Rivkin, Hanushek & Kain, 2005) have reported no difference between teachers with more than five years of teaching experience. Similar to the study by Rivkin et al. (2005), no differences were found between teachers in this study, all of whom had more than five years of teaching experience (see Table D1.1 in Appendix D1).

Previous research (see Askew et al., 1997; Medwell, Wray, Poulson & Fox, 1998) has shown small, indirect associations between pedagogical knowledge and teaching, and no association between subject knowledge and teaching practices (Darling-Hammond, 2000). Creemers and Kyriakides (2015) contended higher levels of subject knowledge do not necessarily lead to effective teaching practices, and even if teachers have acquired knowledge of effective teaching practices from teacher training, they may not necessarily apply this knowledge to classroom teaching practice.

The results from this study would seem to support this contention.

Contrary to findings reported in a large number of previous studies (see pp. 26-29), no association was found between teacher self-efficacy beliefs for orientation, structuring, modelling, practice, questioning and classroom environment and the quantitative characteristics of these teaching behaviours (see Table D6.5 in Appendix D6). Whilst this result might be attributable to the small sample size and the use of less powerful nonparametric statistics, another explanation is that the relationship is curvilinear, as Monk (1994) reported for subject knowledge. If the relationship between self-efficacy beliefs and teaching behaviours is curvilinear, teachers need a certain level of self-efficacy to display effective teaching practices, but beyond this level there is likely to be a negative association. According to Creemers and Kyriakides (2006), several studies (e.g., Schunk, 1991; Stevenson, Chen, & Lee, 1993) provide empirical support for this contention.

Research question four.

The final research question asked, ‘what level of skill in teaching behaviours is demonstrated by mathematics teachers in the 20 highest performing secondary schools in mathematics within DMC, Bangladesh?’

Generally, it may be said that teachers demonstrated skill at level five in the dynamic model (Creemers & Kyriakides, 2008), achieving quality in orientation, modelling and teacher-student interaction, but not in differentiation. In addition, teachers displayed skill at level three in structuring, practice, questioning, and student-student interaction, which was consistent with acquiring quality in active and direct teaching approaches (Creemers & Kyriakides, 2012).

At level one in the dynamic model, teaching skill is determined by measuring the quantitative characteristics of structuring, practice, teacher-student interaction and questioning behaviour. The results showed that more than ninety percent of teachers were able to use structuring, practice and teacher-student interaction routinely in teaching, but only 60% of teachers were proficient in using questioning in this way (see Figure 5.3).

At levels two and three, a similar pattern emerged, where skill levels determined by the qualitative aspects (i.e., stage, focus and quality) of structuring, practice, teacher-student interaction and questioning behaviours showed less than half of the teachers were able to put these qualitative aspects into questioning (see Figure 5.3). However, interview results seem somewhat contrary, as teachers reported using a mixture of question types and feedback strategies consistent with effective questioning techniques (Gall, 1984; Levin & Long, 1981; Weil & Murphy, 1982; Wilen & Clegg, 1986).

A possible explanation for this discrepancy is that teachers have knowledge of the qualitative aspects of questioning from teacher training, but do not necessarily apply them effectively, and hence they were not observed. It seems there are differences between teachers in terms of capacity to use the qualitative aspects of questioning, as only about half of the teachers were able to demonstrate the stage, focus and quality dimensions of questioning at level three. Further, closer examination of questioning behaviours, in particular the infrequent use of questioning (see Figure 5.2) and the emphasis on product questions and feedback strategies (see Table 5.24), suggests teachers wanted to direct students to correct answers as quickly and efficiently as possible. This type of drill and practice session is associated with direct instruction, in which the teacher is the major provider of information (Borich, 2015). The dominant use of product type questions is consistent with research evidence (Dillon, 2004; Walsh & Sattes, 2011).

At level three in the dynamic model, the quality aspects of structuring were displayed by 80% of teachers. This means that teachers were able to achieve lesson clarity through effective use of structuring activities. Teachers reported in interviews that structuring involved planning and a three-part lesson structure comprised of the following elements: orientation, presentation, modelling, application and review. Most teachers noted the use of presentation and modelling, and fewer teachers noted application and review as elements in this lesson structure (see Table D9.2). The presence of these elements in this structured lesson format is consistent with direct instruction methods of teaching (Adams & Engelman, 1996).

In addition to the teaching behaviours associated with direct teaching (i.e., structuring and questioning) at level three, the frequency, stage and focus dimensions of the teacher's role in developing student-student interaction are included in the dynamic model. The results indicated 60% of teachers displayed level three teaching skills in encouraging interaction among students, which occurred at different lesson stages to facilitate active student participation in learning (see Figure 5.3). Although some teachers were able to display behaviours consistent with encouraging interaction among students, only about one percent of the lesson was devoted to this activity (see Figure 5.2). This pattern of behaviour suggests teachers recognised the importance of creating a cooperative learning environment, and indicates the limited scope for this activity within a direct teaching approach.

Of the eight teaching behaviours situated at level four in the dynamic model, only the stage aspect of orientation and modelling was demonstrated by teachers (see Figure 5.3). Thus, teachers were able to effectively use some strategies associated with active and newer teaching approaches (Creemers & Kyriakides, 2012). Although teachers did not report this aspect of orientation and modelling in interviews, the results from observations indicate most teachers (see Figure 5.3) utilised these activities at multiple stages in the lesson, thus ensuring orientation and modelling would have a continuous direct or indirect effect on student learning and achievement (Creemers & Kyriakides, 2012).

Teaching skill at level five in the dynamic model is determined by examining the focus, quality and differentiation aspects of orientation and modelling, and the quality and differentiation aspects of teacher-student and student-student interaction. With respect to orientation, the observation results suggested 87% of teachers were proficient in using orientation activities to help students understand the importance of lesson tasks (see Figure 5.3). Interview results indicate all teachers used orientation activities to inform students of between one and three performance goals related to the lesson unit. Further, some teachers mentioned using a number of strategies (e.g., checking prior learning, using student ideas, overviews, examples, analogies, questions and visuals) to explain and clarify the importance of lesson tasks (see Table D9.1). However, the results showed only about half of the teachers (53%) demonstrated quality aspects in orientation tasks (see Figure 5.3). In other words, just under half of the teachers (47%) were not able to use orientation activities to help students understand the purpose of lesson tasks, which has been shown to increase motivation and student participation (Creemers & Kyriakides, 2006).

Most teachers were able to demonstrate skill in the focus dimension of modelling (see Figure 5.3). Teachers reported in interviews using a range of modelling strategies: presentation, making links to prior learning, using examples and questioning, and student participation, to help students use and/or develop their own problem-solving strategies (see Table D9.3). However, only 47% of teachers were observed to demonstrate the quality dimension of modelling activities. The interview results indicated most teachers modelled a problem-solving strategy after a problem was presented, and the results suggested this was an appropriate modelling approach because students applied the strategies correctly and solved problems successfully (see Table 5.24).

With regard to the quality dimensions of classroom interactions, all teachers displayed quality aspects of teacher-initiated interactions, but only 40% of teachers demonstrated quality in encouraging interaction between students to establish on-task behaviour (see Figure 5.3). While it might be concluded that teachers lack skill in this particular aspect of teaching behaviour, it is suggested this behaviour may not be compatible with the underlying direct teaching approach in Bangladesh. Thus, a possible explanation for this result is that teachers are not reliant on interactions to generate on-task behaviour; instead, most depend on rules, rewards and consequences to establish on-task student behaviour, practices associated with this type of teaching approach (Borich, 2015). If teachers do not allocate much time to this activity (see Figure 5.2), then it is reasonable to expect proficiency deficits.

In addition, it seems teacher-initiated interaction may be more compatible with the underlying direct teaching approach, as teachers demonstrated skill in the quality aspects of this teaching behaviour (see Figure 5.3). Further, the results indicated most teacher-initiated interaction occurred when teachers presented material, made comments, engaged in instruction or posed problems or questions. These activities are consistent with a direct teaching approach (Borich, 2015).

At level five in the dynamic model, teachers can effectively use the differentiation dimension of teaching behaviours. Differentiation refers to the extent to which teaching behaviours differ according to student needs. Teachers were not observed to differentiate teaching behaviours (see Figure 5.3), but several teachers indicated in interviews that they differentiated questions based on student ability (see Shilpo's comment, p. 172). These rather mixed results possibly suggest acceptance of differentiation for some teaching behaviours (i.e., questioning) but not for others. However, further investigation is needed to ascertain how differentiation is viewed in Bangladesh, because this is likely to underlie the extent to which teachers utilise this dimension of teaching behaviour.

Summary

This chapter has reported and discussed the results with reference to the research questions investigated in the study. In the final chapter, the main findings are summarised in the context of the purposes of the study, the strengths and limitations of the study are discussed, and lastly, the implications of the findings for theory, practice and future research are considered.


Chapter 6 Conclusions

The purpose of this final chapter is to firstly summarise the main findings of the study, and then discuss the strengths and limitations of the study, and lastly, consider the implications of the findings for theory, practice and future research.

Study purpose

The study set out, first, to identify high-performing secondary schools in mathematics within Dhaka Metropolitan City (DMC), Bangladesh, and second, to investigate the teaching behaviours of secondary mathematics teachers in the twenty highest performing schools. To accomplish this twofold purpose, a sequential mixed method approach was adopted, generating several findings that provide insight into the teaching behaviours of effective secondary mathematics teachers in the twenty highest performing secondary schools within DMC, Bangladesh. A two-step sampling approach was used to identify the teachers. First, value-added measures based on student grades in the JSC (2010) and SSC (2013) mathematics examinations were used to identify the 20 highest performing secondary schools, and second, a purposive sampling strategy was used to select effective mathematics teachers from these schools.
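
The value-added step can be illustrated with a minimal sketch: regress later (SSC) grades on earlier (JSC) grades and rank schools by their mean residual, so that schools whose students outperform their predicted grades rank highest. This is a simplified illustration only; the school names and grade points below are invented, and the study's actual value-added model is not reproduced here.

```python
# Minimal value-added sketch (illustrative only; invented data).

def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# (school, JSC 2010 grade point, SSC 2013 grade point) -- hypothetical records
records = [
    ("School A", 3.5, 4.5), ("School A", 4.0, 5.0),
    ("School B", 3.5, 3.8), ("School B", 4.0, 4.1),
    ("School C", 2.5, 3.6), ("School C", 3.0, 4.2),
]
jsc = [r[1] for r in records]
ssc = [r[2] for r in records]
slope, intercept = fit_line(jsc, ssc)

# Value added = mean residual (actual SSC minus SSC predicted from JSC)
va = {}
for school, x, y in records:
    va.setdefault(school, []).append(y - (intercept + slope * x))
ranking = sorted(va, key=lambda s: -sum(va[s]) / len(va[s]))
print(ranking)  # schools ordered by value added, highest first
```

Ranking by residuals, rather than raw SSC grades, is what distinguishes a value-added measure: a school with modest absolute results can still rank highly if its students exceed the grades predicted from their earlier performance.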

Summary of main findings

The findings from phase one of the study identified the 20 highest performing secondary schools in mathematics within Dhaka, Bangladesh (see Chapter 5). The most notable shared characteristic of the 20 highest performing secondary schools is that they are all private schools; apart from this characteristic, no other pattern is evident. This is an important finding because it provides insight into the individual school contexts that have shaped the teaching practices investigated in the study.


The findings from phase two (i.e., investigation of teaching behaviours) of the study showed that teachers demonstrated orientation, structuring, modelling, practice, questioning, and classroom environment behaviours, but did not use time management and assessment behaviours. The teachers demonstrated similar teaching behaviours, were very experienced and highly qualified, and expressed high levels of self-efficacy for teaching behaviours. However, no relationships were found between teacher experience, qualifications and self-efficacy beliefs and the quantitative characteristics of teaching behaviours.

Patterns in the quantitative characteristics of teaching behaviours suggested teachers adopted a direct teaching approach (i.e., consistent with direct instruction). In this approach, teachers mostly utilised orientation, modelling, practice and teacher-student interaction, and to a lesser extent structuring and questioning behaviours, to build student understanding of the subject matter. The teaching behaviours suggested teachers adopted the role of activator rather than facilitator of student learning (Hattie, 2009). High correlations among the student-rated teaching behaviours suggested teacher structuring, modelling, questioning and student-teacher interaction behaviours were related. Patterns in student-rated teaching behaviours emphasised teaching behaviours that involved interactions with teachers. Additionally, teachers were rarely observed encouraging competition, cooperation and group work among students, which indicated an individualistic learning environment.
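
The kind of association reported among student-rated behaviours can be illustrated with a rank-based (Spearman) correlation, in keeping with the nonparametric analyses used in the study. The two rating vectors below are invented for illustration and are not the study's data.

```python
# Spearman rank correlation between two hypothetical student-rating scales
# (e.g., "structuring" and "questioning"); data are invented.

def ranks(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

structuring = [4, 5, 3, 4, 5, 2, 4, 3]   # invented 1-5 ratings
questioning = [3, 5, 3, 4, 4, 2, 5, 2]
print(round(spearman(structuring, questioning), 2))
```

Because it operates on ranks rather than raw scores, Spearman's rho makes no normality assumption about the rating distributions, which is why rank-based statistics suit ordinal rating-scale data of this kind.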

The main findings from the analysis of quantitative and qualitative characteristics of teaching behaviours suggested teachers displayed quality teaching (i.e., teaching skill at the highest developmental level, level five, in the dynamic model; Creemers & Kyriakides, 2012) in some teaching behaviours (orientation, modelling and teacher-student interaction), but did not differentiate teaching behaviours according to student needs. In other teaching behaviours (structuring, practice, questioning and student-student interaction), teachers displayed teaching skills consistent with acquiring quality in active and direct teaching approaches (i.e., teaching skill at developmental level three in the dynamic model; Creemers & Kyriakides, 2012).

These findings reflect variation in the levels of teaching skill between and within teachers related to the qualitative characteristics (e.g., focus, stage, quality and differentiation) of teaching behaviours. However, there is no clear pattern with regard to the qualitative use of teaching behaviours, with the exception of questioning, which seems to be weaker than other teaching behaviours.

Aside from this, it is apparent from the findings that teachers under-utilised teaching behaviours that contribute to interactive and collaborative learning environments, such as encouraging competition, cooperation and group work. Although Hattie (2009) reported the 'power of peers in learning equation' (p. 212) and the 'superiority of cooperation with intergroup competition over interpersonal competition and individualistic efforts' (p. 213), the teachers in this study chose not to use these behaviours. There may be a number of reasons for this, such as cultural factors, class sizes or the mixed ability of students, but these are speculative; the reasons could not be determined in this study and require further investigation.

Thus, it may be concluded from this study that effective secondary mathematics teachers in the 20 highest performing secondary schools in Dhaka, Bangladesh utilised teaching behaviours consistent with a direct teaching approach. Further, even though teachers demonstrated the quantitative characteristics of teaching behaviours associated with this approach, they were not necessarily proficient in the qualitative characteristics (e.g., focus, stage, quality and differentiation) of teaching behaviours. In addition, teachers were found to under-utilise some teaching behaviours (e.g., encouraging competition, cooperation and group work) known to create interactive and collaborative learning environments.

Strengths and limitations of study

This study has a number of strengths. First, the study has provided new insight into the teaching behaviours of effective secondary mathematics teachers within DMC, Bangladesh. Central to obtaining these new insights has been the application of the dynamic model of educational effectiveness (Creemers & Kyriakides, 2012) to the Bangladesh education context. Significantly, this study is the first to apply this model in Bangladesh. Second, this study is one of only a few studies conducted in Bangladesh that have gathered data through teacher observations. The gathering of data through teacher observations has enabled accurate reporting of real-time teaching behaviours that are not readily identified by teachers in self-reports. Last, the study adopted a mixed method approach, which enabled triangulation of data sources and methods, facilitating the inclusion of multiple perspectives and adding to the robustness of the research findings.

Despite these strengths, the study has at least three limitations. The first limitation concerns the use of teacher observations. Most teacher observations are susceptible to bias from a cluster of sources (Cohen et al., 2011). One source in this study may have been teachers feeling that they had to perform because they were being studied. Another source may have been the one-off, 40 to 50-minute teacher observation. One lesson is a short period of time, and it is possible that atypical lessons were observed. In addition, although the inter-rater reliability (kappa = .72, p < .001) represented a good level of agreement (Elliott & Woodward, 2007), it was not an overly high level of agreement between the two observers. Further, data from teacher observations were analysed with nonparametric statistical tests, and while appropriate for this type of data, these tests are less powerful than parametric statistical tests and less able to demonstrate significant effects, even if a real effect exists (Morgan et al., 2012).
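
For readers unfamiliar with the kappa statistic mentioned above, the following sketch shows how Cohen's kappa corrects observed inter-rater agreement for the agreement expected by chance. The two observers' codings below are invented for illustration; they are not the study's observation data.

```python
# Cohen's kappa for two observers coding the same lesson segments
# (illustrative only; invented codings).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal proportions per category.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of 10 segments ("D" = direct, "I" = interactive)
a = ["D", "D", "I", "D", "I", "D", "D", "I", "D", "D"]
b = ["D", "D", "I", "D", "D", "D", "D", "I", "D", "I"]
print(round(cohens_kappa(a, b), 2))
```

A kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, which is why a value of .72 is read as good, but not very high, agreement between observers.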

The second limitation relates to the sampling. A small sample of teachers participated in the study. The use of a purposive sampling strategy that focused on identifying mathematics teachers from the 20 highest performing secondary schools within Dhaka, Bangladesh would have excluded teachers whose students achieved outstanding results in other subjects due to teacher effectiveness. Further, teachers were chosen because they were employed in one of the 20 highest performing secondary schools, and this excluded mathematics teachers in other secondary schools within Dhaka, Bangladesh. Thus, the findings may not be fully representative of, nor generalisable to, the population of effective secondary mathematics teachers within Dhaka, Bangladesh.

A third limitation concerns the use of value-added measures to identify the 20 highest performing schools in the first phase of the study. Several scholars (e.g., Braun, 2005; Glass, 2005; Kupermintz, 2003; McCaffrey et al., 2004) have argued value-added measures are problematic, as it is difficult to isolate individual teacher contributions from a range of other factors, such as school characteristics, likely to shape student learning and achievement.

In sum, it is recognised the study has a number of limitations, the findings need to be interpreted cautiously, and there is a need for further investigation. Notwithstanding these limitations, the study contributes new insight about the teaching behaviours of effective secondary mathematics teachers in Bangladesh.

Implications for theory and practice

The findings from this study make several contributions to the literature and have implications for practice.


This study, conducted in Bangladesh, contributes to the international dimension of educational effectiveness research. The findings lend support to the contention made by Panayiotou et al. (2014) that effective teaching can be described in similar ways in different contexts. Based on the results of this study, effective teaching behaviours include orientation, structuring, modelling, practice, questioning and teacher-student interactions that contribute to classroom learning environments. These teaching behaviours have been previously identified in research on teacher effectiveness.

The findings from the study provide partial support for the teaching factors and dimensions in the dynamic model of educational effectiveness (DMEE) (Creemers & Kyriakides, 2008), and emphasise the need for further studies, particularly in cross-cultural contexts. For example, two teaching factors, time management and assessment, were not identified from either the student ratings or the teacher observations, and while this may be attributed to methodological limitations, there is a need for further investigation. Further, mixed results in regard to the differentiation dimension of teaching behaviours indicate differentiation may not be interpreted in the same way in Bangladesh (e.g., it may not be considered fair). Studies that utilise other ways to measure this dimension may generate insight about the operation of this dimension, particularly in cross-cultural contexts.

In addition, the findings from the study raise questions regarding the use of direct, active and constructivist teaching approaches in the dynamic model. In this model, teaching approaches are defined in terms of teaching behaviours (e.g., direct teaching with structuring, questioning and application, and active and constructivist approaches with orientation and modelling). However, the justification for the association of specific teaching behaviours with each teaching approach is not clear. Many researchers (e.g., Ayres et al., 2004; Hattie, 2009) would argue that orientation and modelling are equally characteristic of a direct teaching approach. Also, the results from this study suggested interrelationships between teaching behaviours indicative of an integrated teaching approach. Whilst the dynamic model is supportive of an integrated approach, it is suggested that these results have emphasised the importance of the teacher as activator rather than facilitator of student learning. Hattie's (2009) meta-analysis findings are supportive of this approach.

Despite support for an integrated teaching approach (Creemers & Kyriakides, 2008), the developmental levels of teaching skill associated with the model suggest lower levels of teaching skill are associated with a direct approach and higher levels with a constructivist approach. Thus, more highly skilled teachers use a constructivist approach and less skilled teachers use a direct teaching approach.

However, the findings from this study showed effective mathematics teachers adopt a direct teaching approach. The use of a teacher-directed approach enabled teachers in this study to build understanding of the subject matter. It was consistent with direct instruction or explicit teaching (Rosenshine, 1979, 1986), and involved systematic use of orientation, structuring, modelling, questioning, practice and teacher-student interaction behaviours to build student understanding of subject matter. The use of a direct teaching approach is not without critics (e.g., Berg & Clough, 1991; Driscoll, 2005), and is contrary to existing practices. Nevertheless, substantial research evidence (Anderson, Reder & Simon, 1995; Hattie, 2009; Sweller, Kirschner & Clark, 2007) has accumulated to support the effectiveness of this teaching approach.

Nonetheless, given that the result from this study is contrary to existing literature which emphasises constructivist approaches, further research, conducted particularly in cross-cultural contexts, is warranted.

The findings provide important insights into the teaching behaviours of effective secondary mathematics teachers relevant to policymakers, teacher educators, school leaders and practitioners. For policymakers, the findings have implications for the development of policy on quality teaching, and offer insight with regard to professional development programs. The findings from the study have shown that quality teaching comprises both the quantity and quality of teaching behaviours, and professional development programs should address both if teaching skills are to be improved. Further, professional development programs should be differentiated to address the specific developmental needs of teachers.

For teacher educators, the findings from the study suggest training courses need to focus on both the quantitative and qualitative characteristics of teaching behaviours to build teaching skills. Further, teaching behaviours should not be taught in isolation; the study has shown effective teachers integrate teaching behaviours into a direct approach that involves being an activator rather than a facilitator of student learning and achievement.

For school leaders, the findings from the study have illustrated the utility of using a validated framework to systematically investigate the quality of teaching in school improvement efforts, making it more likely that reform will be evidence-based.

The study provides guidance to practitioners with regard to teaching behaviours consistent with quality teaching practices. To demonstrate quality teaching practice, practitioners should integrate these teaching behaviours (i.e., orientation, structuring, modelling, practice, questioning and classroom environment) into teaching practices (if they have not already done so) with consideration of not only the quantitative, but also the qualitative aspects of teaching behaviours.


Directions for future research

The study provides the first assessment of teaching behaviours of effective secondary mathematics teachers in Bangladesh, and serves as a base for future studies of teaching behaviour. Future studies should investigate the generalisability of the study findings, particularly with regard to the quality of teaching. Qualitative case studies may provide insight about variations in the qualitative use of teaching behaviours. The findings from such studies could be used to develop professional development programs that specifically address weaker teaching skills. The dynamic model could be used as a framework in longitudinal studies of quality teaching which would facilitate the development of evidence-based policy and quality teaching practice in Bangladesh.

Based on the conceptual framework of the study (see page 76), future studies could investigate the interrelationship between teacher self-efficacy for teaching and teacher background factors (e.g., teaching experience, education, certification), which was not examined in the present study. As student background factors such as gender and socio-economic status (SES) variables (for example, family background characteristics) may influence student performance (Rice, 1999), further studies examining the relationships between student and teacher factors may yield new findings that will improve the quality of secondary education in Bangladesh.

In sum, this is an important study that has contributed new insight into the teaching behaviours of effective secondary mathematics teachers in Bangladesh.

These teachers adopt a direct teaching approach which incorporates orientation, structuring, modelling, practice, questioning and teacher-student interaction, and which creates an individualistic learning environment. The variation in developmental levels of teaching skill indicates teachers do not necessarily use the qualitative (i.e., focus, stage, quality and differentiation) aspects of teaching behaviours effectively. This should be the starting point for further research to improve the quality of teaching in Bangladesh.


References

Adams, G., & Engelmann, S. (1996). Research on Direct Instruction: 25 years beyond DISTAR. Seattle, WA: Educational Achievement Systems.

Ahmed, M. (2013). The Post-2015 MDG and EFA agenda and the national discourse about goals and targets: A case study of Bangladesh. NORRAG Working paper no. 5. Geneva: Graduate Institute of International Education and Development Studies (IHEID).

Ahmed, M., Hossain, A., Kalam, A., & Ahmed, S. (2013). Education Watch 2011-12, Skills development in Bangladesh: Enhancing the youth skills profile. Dhaka, Bangladesh: Campaign for Popular Education (CAMPE).

Ahmed, M., Nath, R., & Ahmed, S. (2003). Education Watch 2002, Literacy in Bangladesh: Need for a new vision. Dhaka, Bangladesh: Campaign for Popular Education (CAMPE).

Ahmed, M., Nath, R., Hossain, A., & Kalam, A. (2006). Education Watch 2005, The state of secondary education: Progress and challenge. Dhaka, Bangladesh: Campaign for Popular Education (CAMPE).

Aldridge, A., & Levine, K. (Eds.). (2001). Surveying the social world. Buckingham, UK: Open University Press.

Aliaga, M., & Gunderson, B. (2002). Interactive statistics. Upper Saddle River, NJ: Pearson Education.

Allinder, R. (1994). The relationship between efficacy and the instructional practices of special education teachers and consultants. Teacher Education and Special Education, 17, 86-95. doi: 10.1177/088840649401700203

Allinder, R. (1995). An examination of the relationship between teacher efficacy and curriculum-based measurement and student achievement. Remedial and Special Education, 16, 247-254. doi: 10.1177/074193259501600408

Ambrose, S., Bridges, M., Di Pietro, M., Lovett, M., & Norman, M. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.

and issues. Cambridge Journal of Education, 21, 309-318.

Andaleeb, S. (Ed.). (2007). Political culture in Bangladesh: Perspectives and analysis. Dhaka, Bangladesh: Bangladesh Development Initiative.

Anderson, J. (1980). Cognitive psychology and its implications. San Francisco, CA: Freeman.

Anderson, J., Reder, L., & Simon, H. (1995). Applications and misapplications of cognitive psychology to . Retrieved from http://www.psy.cmu.edu/~mm4b/misapplied.html.


Anderson, L., Evertson, C., & Brophy, J. (1979). An experimental study of effective teaching in first-grade reading groups. Elementary School Journal, 79, 193-223.

Anderson, O. (1974). Research on structure in teaching. Journal of Research in Science Teaching, 11, 219-230. doi:10.1002/tea.3660110306

Anderson, R., Greene, M., & Loewen, P. (1988). Relationships among teachers' and students' thinking skills, sense of efficacy, and student achievement. Alberta Journal of , 34, 148-165.

Antoniou, P. (2013). A longitudinal study investigating relations between stages of effective teaching, teaching experience, and teacher professional development approaches. The Journal of Classroom Interaction, 48, 25-40.

Antoniou, P., & Kyriakides, L. (2011). The impact of a dynamic approach to professional development on teacher instruction and student learning: Results from an experimental study. School Effectiveness and School Improvement, 22, 291-311. doi:10.1080/09243453.2011.577078

Antoniou, P., & Kyriakides, L. (2013). A dynamic integrated approach to teacher professional development: Impact and sustainability of the effects on improving teacher behaviour and student outcomes. Teaching and Teacher Education, 29, 1- 12. doi:10.1016/j.tate.2012.08.001

Antoniou, P., Kyriakides, L., & Creemers, B. (2015). The dynamic integrated approach to teacher professional development: rationale and main characteristics, Teacher Development, 19, 535-552. doi:10.1080/13664530.2015.1079550

Armento, B. (1977). Teacher behaviours related to student achievement on a social science concept test. Journal of Teacher Education, 28, 46-52.

Armor, D., Conroy-Oseguera, P., Cox, M., King, N., McDonnell, L., Pascal, A., Pauly, E., & Zellman, G. (1976). Analysis of the school preferred reading programs in selected Los Angeles minority schools (Report No. R-2007-LAUSD). Santa Monica, CA: Rand Corporation. (ERIC Document Reproduction Service No. 130 243).

Asadullah, S. (2008). Effectiveness of bachelor of education program on secondary school mathematics in Bangladesh: A case study of Dhaka City (Unpublished Master’s thesis). Hiroshima University, Japan.

Ashton, P., & Webb, R. (1986). Making a difference: Teachers’ sense of efficacy and student achievement. White Plains, NY: Longman.

Ashton, P., Webb, R., & Doda, N. (1983). A study of teachers’ sense of efficacy (Report No. 400790075). Gainesville, FL: National Institute of Education.

Asian Development Bank (ADB). (2002). Technical assistance to the People’s Republic of Bangladesh for preparing the teaching quality improvement in secondary education project. Retrieved from http://www.adb.org/sites/default/files/project- document/71019/r178-02.pdf


Asian Development Bank (ADB). (2004). Report and recommendation of the President to the board of directors on a proposed loan to the People’s Republic of Bangladesh for the teaching quality improvement in secondary education project. Retrieved from http://www.adb.org/sites/default/files/project- document/71019/r178-02.pdf

Askew, M., Brown, M., Rhodes, V., Johnson, D., & William, W. (1997). Effective teachers of numeracy: Report of a study carried out for the teacher training agency. London: King’s College, University of London.

Ausubel, D. (1968). : A cognitive view. New York: Holt, Rinehart & Winston.

Ayres, P., Dinham, S., & Sawyer, W. (2001). Effective teaching and student independence at Grade 12. Paper presented at the Annual Meeting of the American Educational Research Association, Seattle, WA.

Ayres, P., Dinham, S., & Sawyer, W. (2004). Effective teaching in the context of a grade 12 high-stakes external examination in New South Wales, Australia. British Educational Research Journal, 30, 141-165. doi:10.1080/01411920310001630008

Azigwe, J., Kyriakides, L., Panayiotou, A., & Creemers, B. (2016). The impact of effective teaching characteristics in promoting student achievement in Ghana. International Journal of Educational Development, 51, 51-61.

Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioural Statistics, 29, 37–65. doi: 10.3102/10769986029001037

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioural change. Psychological Review, 84, 191-215.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.

Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (pp. 307-337). Greenwich, CT: Information Age.

Bangladesh Bureau of Educational Information and Statistics (BANBEIS). (2014). Basic educational statistics, Bangladesh and Time series data. Retrieved from official website www.banbeis.gov.bd

Bangladesh Bureau of Statistics (BBS). (2002). Report of the labour force survey 1999- 2000. Dhaka, Bangladesh: Ministry of Planning, Government of Bangladesh.

Bangladesh Bureau of Statistics (BBS). (2016). Total population, Bangladesh. Retrieved from official website www.bbs.gov.bd

Barnes, D. (1974). Language, the learner and the school. London: Penguin Education.


Barry, R. (2010). Teaching effectiveness and why it matters. Marylhurst, OR: Marylhurst University.

Bennett, S. (1976). Teaching styles and pupil progress. London: Open Books.

Berg, C., & Clough, M. (1991). Hunter lesson design: The wrong one for science teaching. , 48, 73-78.

Berliner, D. (1984). The half-full glass: A review of research on teaching. In P. Hosford (Ed.), Using what we know about teaching (pp. 51-77). Alexandria, VA: Association for Supervision and Curriculum Development.

Berliner, D. (1994). Expertise: The wonder of exemplary performances. In J. Mangieri & C. Block (Eds.), Creating powerful thinking in teachers and students: Diverse perspectives (pp. 161–186). Fort Worth, TX: Harcourt Brace College.

Berliner, D., Fisher, C., Filby, N., & Marliave, R. (1978). Executive summary of beginning teacher evaluation study. San Francisco, CA: Far West Regional Laboratory for Educational Research and Development.

Berman, P., McLaughlin, M., Bass, G., Pauly, E., & Zellman, G. (1977). Federal programs supporting educational change. Santa Monica, CA: The Rand Cooperation.

Biggs, J. (1999). Teaching for quality learning at university. Buckingham, UK: SRHE and Open University Press.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5, 7-74. doi:10.1080/0969595980050102

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives. Handbook I: Cognitive domain. New York: McKay.

Bloom, K., & Shuell, T. (1981). Effects of massed and distributed practice on the learning and retention of second-language vocabulary. The Journal of Educational Research, 74, 245-248. doi:10.1080/00220671.1981.10885317

Borg, W. (1979). Teacher coverage of academic content and pupil achievement. Journal of Education Psychology, 71, 635-645. doi:10.1037/0022-0663.71.5.635

Borich, G. (1996). Effective teaching methods. New York: Macmillan.

Borich, G. (2015). Observation skills for effective teaching: research-based practice. New York: Routledge.

Bransford, J., & Donovan, M. (2005). Scientific inquiry and how people learn. In M. Donovan & J. Bransford (Eds.), How students learn: History, mathematics, and science in the classroom (pp. 397-420). Washington DC: National Academy Press.

Braun, H. (2005). Using student progress to evaluate teachers: A primer on value-added models. Policy information perspective. Princeton, NJ: Educational Testing Service.

Brekelmans, M., van den Eeden, P., Terwel, J., & Wubbels, T. (1997). Student characteristics and learning environment interactions in mathematics and : A resource perspective. International Journal of Educational Research, 27, 283-292. doi:10.1016/S0883-0355(97)90010-0

Briggs, D., & Domingue, B. (2011). Due diligence and the evaluation of teachers: A review of the value-added analysis underlying the effectiveness rankings of Los Angeles Unified School District teachers by the Los Angeles Times. Boulder, CO: National Education Policy Center. Retrieved from http://nepc.colorado.edu/publication/due-diligence

Brophy, J., & Evertson, C. (1976). Learning from teaching: A developmental perspective. Boston, MA: Allyn and Bacon.

Brophy, J., & Good, T. (1986). Teacher behaviour and student achievement. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 328-375). New York: Macmillan.

Brown, G., & Edmondson, R. (1984). Asking questions. In E. Wragg (Ed.), Classroom teaching skills (pp. 97-120). London: Croom Helm.

Brown, S., & Knight, P. (1994). Assessing learners in higher education. London: Routledge Falmer.

Brown, T. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press.

Bruner, J. (1964). The course of cognitive growth. American Psychologist, 19, 1-15. doi:10.1037/h0044160

Bruner, J. (1966). Toward a theory of instruction. Cambridge, MA: Belknap Press of Harvard University.

Buggey, L. (1971). A study of the relationship of classroom questions and social studies achievement of second grade children (Doctoral dissertation). University of Washington, Washington (University Microfilm No. 71-28385).

Burns, R. (1982). Self-concept development and education. London: Cassell.

Butler, D. (1998). The strategic content learning approach to promoting self-regulated learning: A summary of three studies. Journal of Educational Psychology, 90, 682-697. doi: 10.1037/0022-0663.90.4.682

Calderhead, J. (1989). Reflective teaching and teacher education. Teaching and Teacher Education, 5, 43-51.

Camburn, E., & Barnes, C. (2004). Assessing the validity of a language arts instruction log through triangulation. Elementary School Journal, 105, 49-73. doi:10.1086/428802

Camilli, G., & Hopkins, K. (1978). Applicability of chi-square to 2 × 2 contingency tables with small expected cell frequencies. Psychological Bulletin, 85, 163-167. doi:10.1037/0033-2909.85.1.163

Campbell, R., Kyriakides, L., Muijs, R., & Robinson, W. (2003). Differential teacher effectiveness: Towards a model for research and teacher appraisal. Oxford Review of Education, 29, 347-362. doi:10.1080/03054980307440

Canadian International Development Agency (CIDA). (2012). Project profile: Teaching quality improvement in secondary education. Retrieved from http://www.acdi-cida.gc.ca/cidaweb/cpo.nsf/projen/A032356001

Caprara, V., Barbaranelli, C., Steca, P., & Malone, S. (2006). Teachers' self-efficacy beliefs as determinants of job satisfaction and students' academic achievement: A study at the school level. Journal of School Psychology, 44, 473-490. doi:10.1016/j.jsp.2006.09.001

Carle, A. (2009). Evaluating college students’ evaluations of a professor’s teaching effectiveness across time and instruction mode (online vs. face-to-face) using a multilevel growth modelling approach. Computers & Education, 53, 429-435. doi:10.1016/j.compedu.2009.03.001

Carlsen, W. (1991). Questioning in classrooms: A sociolinguistic perspective. Review of Educational Research, 61, 157-178. doi:10.3102/00346543061002157

Cashin, W. (1999). Student ratings of teaching: Uses and misuses. In P. Seldin (Ed.), Current practices in evaluating teaching: A practical guide to improved faculty performance and promotion/tenure decisions (pp. 25-44). Bolton, MA: Anker.

Centre for Policy Dialogue (CPD). (2001). Policy brief on “Education Policy”. Election 2001: National Policy Forum. Dhaka, Bangladesh: CPD.

Cepeda, N., Pashler, H., Vul, E., Wixted, J., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132, 354-380. doi:10.1037/0033-2909.132.3.354

Chowdhury, R., Choudhury, K., Nath, R., Ahmed, M., & Alam, M. (2000). Education Watch 2000, A question of quality: State of primary education in Bangladesh. Dhaka, Bangladesh: Campaign for Popular Education (CAMPE).

Clarke, T., Ayres, P., & Sweller, J. (2005). The impact of sequencing and prior knowledge on learning mathematics through spreadsheet applications. Educational Technology Research and Development, 53, 15-24. Retrieved from http://www.jstor.org/stable/30220438

Clegg, A. (1987). Why questions. In W. Wilen (Ed.), Questions, questioning techniques, and effective teaching (pp. 11-22). Washington, DC: National Education Association.

Clift, R., Houston, R., & Pugach, M. (Eds.), (1990). Encouraging reflective practice in education. An analysis of issues and programs. New York: Teachers College Press.

Cochran, W. (1954). Some methods for strengthening the common χ² tests. Biometrics, 10, 417-451. doi:10.2307/3001616

Coe, R., Aloisi, C., Higgins, S., & Major, L. (2014). What makes great teaching? Review of the underpinning research. Project Report. London: Sutton Trust.

Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). New York: Routledge.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). New York: Routledge.

Coker, H., Medley, D., & Soar, R. (1980). How valid are expert opinions about effective teaching? Phi Delta Kappan, 62, 131-149.

Coladarci, T. (1992). Teachers' sense of efficacy and commitment to teaching. Journal of Experimental Education, 60, 323-337. doi:10.1080/00220973.1992.9943869

Cole, P., & Chan, L. (1987). Teaching principles and practices. New York: Prentice Hall.

Coleman, J., Campbell, E., Hobson, C., McPartland, J., Mood, A., Weinfeld, F., & York, R. (1966). Equality of educational opportunity. Washington, DC: Government Printing Office.

Coolican, H. (2009). Research methods and statistics in psychology. London: Hodder Education.

Cornelius-White, J. (2008). Learner-centered teacher-student relationships are effective: A meta-analysis. Review of Educational Research, 77, 113-143. doi:10.3102/003465430298563

Costa, A., & Garmston, R. (1994). Cognitive coaching: A foundation for renaissance schools. Norwood, MA: Christopher-Gordon Publishers.

Cotton, K. (1988). Classroom questioning. Portland, OR: Northwest Regional Educational Laboratory. Retrieved from http://educationnorthwest.org/sites/default/files/ClassroomQuestioning.pdf

Cotton, K. (1989). Teaching questioning skills: Franklin Elementary School. Portland, OR: Northwest Regional Educational Laboratory. Retrieved from http://educationnorthwest.org/sites/default/files/TeachingQuestioningSkills.pdf

Council of Chief State School Officers and National Governors Association Center for Best Practices. (2010). Common core state standards initiative. Washington, DC: Author. Retrieved from http://www.corestandards.org/the-standards.

Cousins, J., & Walker, C. (2000). Predictors of educators’ valuing of systematic inquiry in schools. Canadian Journal of Program Evaluation, 15, 25-52.

Creemers, B. (1994). The effective school. London: Cassell.

Creemers, B. (1999). The effective teacher: what changes and remains. Asia Pacific Journal of Teacher Education & Development, 2, 51-63.

Creemers, B., & Kyriakides, L. (2006). Critical analysis of the current approaches to modelling educational effectiveness: The importance of establishing a dynamic model. School Effectiveness and School Improvement, 17, 347-366. doi:10.1080/09243450600697242

Creemers, B., & Kyriakides, L. (2008). The dynamics of educational effectiveness: A contribution to policy, practice and theory in contemporary schools. London: Routledge.

Creemers, B., & Kyriakides, L. (2010). Explaining stability and changes in school effectiveness by looking at changes in the functioning of school factors. School Effectiveness and School Improvement, 21, 409–427. doi:10.1080/09243453.2010.512795

Creemers, B., & Kyriakides, L. (2012). Improving quality in education: Dynamic approaches to school improvement. London: Routledge.

Creemers, B., & Kyriakides, L. (2015). Process-product research: A cornerstone in educational effectiveness research. Journal of Classroom Interaction, 50, 107-119.

Creemers, B., & Kyriakides, L. (2015a). Developing, testing, and using theoretical models for promoting quality in education. School Effectiveness and School Improvement, 26, 102-119. doi:10.1080/09243453.2013.869233

Creemers, B., & Reezigt, G. (1996). School level conditions affecting the effectiveness of instruction. School Effectiveness and School Improvement, 7, 197-228. doi:10.1080/0924345960070301

Creemers, B., & Reezigt, G. (1999). The role of school and classroom climate in elementary school learning environments. In H. Freiberg (Ed.), School climate: Measuring, improving and sustaining healthy learning environments (pp. 30-47). London: Falmer Press.

Creemers, B., Kyriakides, L., & Antoniou, P. (2012). Teacher professional development for improving quality of teaching. New York: Springer Science & Business Media.

Creemers, B., Kyriakides, L., & Sammons, P. (2010). Methodological advances in educational effectiveness research. London and New York: Routledge.

Creswell, J. (2002). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Prentice Hall.

Creswell, J. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. New Delhi, India: Sage.

Creswell, J. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Boston, MA: Pearson.

Creswell, J. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Los Angeles, CA: Sage.

Creswell, J., & Clark, V. (2011). Designing and conducting mixed methods research. Los Angeles, CA: Sage.

Croll, P., & Moses, D. (1988). Teaching methods and time on task in junior classrooms. Educational Research, 30, 90-97. doi:10.1080/0013188880300202

Croll, P. (1996). Teacher-pupil interaction in the classroom. In P. Croll & N. Hastings (Eds.), Effective primary teaching (pp. 14-28). London: David Fulton.

Crooks, T. (1988). The impact of classroom evaluation on students. Review of Educational Research, 58, 438-481. doi:10.3102/00346543058004438

Cunningham, D. (1987). Outline of an educational semiotic. The American Journal of Semiotics, 5, 201-216. doi:10.5840/ajs19875216

Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8, 1-44.

Darling-Hammond, L., Holtzman, D., Gatlin, S., & Heilig, J. (2005). Does teacher preparation matter? Evidence about teacher certification, Teach for America, and teacher effectiveness. Education Policy Analysis Archives, 13, 1-51.

Dart, B. (1994). Measuring constructivist learning environments in tertiary education. A paper presented at the annual conference of the Australian Association for Research in Education, Newcastle.

Davis, H. (2003). Conceptualizing the role and influence of student–teacher relationships on children’s social and cognitive development. Educational Psychologist, 38, 207–234.

De Corte, E. (2000). Marrying theory building and the improvement of school practice: A permanent challenge for instructional psychology. Learning and Instruction, 10, 249-266. doi:10.1016/S0959-4752(99)00029-8

den Brok, P., Brekelmans, M., Levy, J., & Wubbels, T. (2002). Diagnosing and improving the quality of teachers' interpersonal behaviour. International Journal of Educational Management, 16, 176-184. doi:10.1108/09513540210432155

den Brok, P., Brekelmans, M., & Wubbels, T. (2004). Interpersonal teacher behaviour and student outcomes. School Effectiveness and School Improvement, 15, 407-442. doi:10.1080/09243450512331383262

Denzin, N. (1997). Interpretive ethnography: Ethnographic practices for the 21st century. Thousand Oaks, CA: Sage.

Department for Education and Employment (DfEE). (1999). The structure of the literacy hour. Retrieved from http://www.standards.dfee.gov.uk/literacy/literacyhour

Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. New York: D. C. Heath & Co.

Dignath, C., & Buttner, G. (2008). Components of fostering self-regulated learning among students. A meta-analysis on intervention studies at primary and secondary school level. Metacognition and Learning, 3, 231-264.

Dillon, J. (2004). Questioning and teaching: A manual of practice. Eugene, OR: Wipf and Stock Publishers.

Directorate of Primary Education (DPE). (2012). Bangladesh primary education annual sector performance report [ASPR-2012]. Dhaka, Bangladesh: Ministry of Primary and Mass Education.

Directorate of Primary Education (DPE). (2014). Annual primary school census 2014. Dhaka, Bangladesh: Ministry of Primary and Mass Education.

Directorate of Primary Education (DPE). (2014a). Statistics of primary terminal examination (PTE) 2009-2014. Retrieved from www.dpe.gov.bd

Donovan, J., & Radosevich, D. (1998). The moderating role of goal commitment on the goal difficulty–performance relationship: A meta-analytic review and critical reanalysis. Journal of Applied Psychology, 83, 308-315. doi:10.1037/0021-9010.83.2.308

Doran, H., & Izumi, L. (2004). Putting education to the test: A value-added model for California. San Francisco: Pacific Research Institute.

Douglas, S., & Craig, C. (2007). Collaborative and iterative translation: An alternative approach to back translation. Journal of International Marketing, 15, 30-43. doi:10.1509/jimk.15.1.030

Dowden, T. (2007). Relevant, challenging, integrative and exploratory curriculum design: Perspectives from theory and practice for middle level schooling in Australia. The Australian Educational Researcher, 34, 51-71.

Dowell, L. (1975). The effect of a competitive and cooperative environment on the comprehension of a cognitive task. Journal of Educational Research, 68, 274-276. doi:10.1080/00220671.1975.10884769

Doyle, W. (1986). Classroom organisation and management. In M. Wittrock (Ed.), Handbook of research on teaching (pp. 392-431). New York: Macmillan.

Driscoll, M. (2005). Psychology of learning for instruction. Toronto, ON: Pearson.

Drury, D., & Doran, H. (2003). The value of value-added analysis. Policy Research Brief, 3, 1-4. Alexandria, VA: National School Boards Association.

Duffy, T., & Jonassen, D. (1992). Constructivism and the technology of instruction: A conversation. Hillsdale, NJ: Erlbaum.

Dunkin, M., & Biddle, B. (1974). The study of teaching. Oxford, England: Holt, Rinehart & Winston.

Edwards, W., Malik, S., & Haase, J. (2010). Observations and suggested literacy activities for Room to Read Bangladesh. Newark, DE: International Reading Association.

Efron, S., & Ravid, R. (2013). Action research in education: A practical guide. New York, NY: Guilford Press.

Elawar, M., & Corno, L. (1985). A factorial experiment in teachers' written feedback on student homework: Changing teacher behaviour a little rather than a lot. Journal of Educational Psychology, 77, 162-173. doi:10.1037/0022-0663.77.2.162

Elliott, A., & Woodward, W. (2007). Statistical analysis quick reference guidebook: With SPSS examples. Thousand Oaks, CA: Sage.

Emmer, E., & Evertson, C. (1981). Synthesis of research on classroom management. Educational Leadership, 38, 342-347.

Emmer, E., & Evertson, C. (2012). Classroom management for middle and high school teachers. Boston, MA: Pearson.

Emmer, E., Evertson, C., & Anderson, L. (1980). Effective classroom management at the beginning of the school year. Elementary School Journal, 80, 219-231. Retrieved from http://www.jstor.org/stable/1001461

Ericsson, K., & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49, 725-747. doi:10.1037/0003-066X.49.8.725

Ericsson, K., & Lehmann, A. (1996). Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annual Review of Psychology, 47, 273- 305. doi:10.1146/annurev.psych.47.1.273

Ericsson, K., Krampe, R., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363-406. doi:10.1037/0033-295X.100.3.363

Evertson, C., Anderson, C., Anderson, L., & Brophy, J. (1980). Relationships between classroom behaviours and student outcomes in junior high mathematics and English classes. American Educational Research Journal, 17, 43-60. doi:10.3102/00028312017001043

Fenwick, T. (2001). Using student outcomes to evaluate teaching: A cautious exploration. New Directions for Teaching and Learning, 2001(88), 63-74. doi:10.1002/tl.38

Field, P., & Morse, J. (1989). Nursing research: The application of qualitative methods. London: Chapman and Hall.

Filmer, D., & Pritchett, L. (1999). The effect of household wealth on educational attainment: Evidence from 35 countries. Population and Development Review, 25, 85-120.

Fisher, C., Berliner, D., Filby, N., Marliave, R., Cahen, L., & Dishaw, M. (1980). Teaching behaviours, academic learning time, and student achievement: An overview. In C. Denham & A. Lieberman (Eds.), Time to learn (pp. 7-32). Washington, DC: National Institute of Education.

Fives, H. (2003). What is teacher efficacy and how does it relate to teachers’ knowledge? A theoretical review. Paper presented at the American Educational Research Association Annual Conference, Chicago.

Flanders, N. (1970). Analyzing teaching behaviour. Oxford, England: Addison-Wesley.

Fowler, F. (2009). Survey research methods. Thousand Oaks, CA: Sage.

Fraser, B. (1986). Classroom environment. London: Croom Helm.

Fraser, B. (1991). Two decades of classroom environment research. In B. Fraser & J. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences (pp. 3-27). Elmsford, NY: Pergamon Press.

Fraser, B., Walberg, H., Welch, W., & Hattie, J. (1987). Syntheses of educational productivity research. International Journal of Educational Research, 11, 145-252. doi:10.1016/0883-0355(87)90035-8

Freiberg, H., & Driscoll, A. (2000). Universal teaching strategies. Boston, MA: Allyn & Bacon.

Freiberg, H., & Stein, T. (1999). Measuring, improving and sustaining healthy learning environments. In H. Freiberg (Ed.), School climate: measuring, improving and sustaining healthy learning environments (pp. 11-29). London: Falmer Press.

Frey, B. (2016). There's a stat for that! What to do & when to do it. Thousand Oaks, CA: Sage.

Gall, M. (1970). The use of questions in teaching. Review of Educational Research, 40, 707-721.

Gall, M. (1984). Synthesis of research on teachers' questioning. Educational Leadership, 42, 40-47.

Gall, M., & Rhody, T. (1987). Review of research on questioning techniques. In W. Wilen (Ed.), Questions, questioning techniques, and effective teaching (pp. 23-48). Washington, DC: National Education Association.

Gall, M., Ward, B., Berliner, D., Cahen, L., Winne, P., Elashoff, J., & Stanton, G. (1978). Effects of questioning techniques and recitation on student learning. American Educational Research Journal, 15, 175-199.

Galton, M. (1987). An ORACLE chronicle: A decade of classroom research. Teaching and Teacher Education, 3, 299-313. doi:10.1016/0742-051x(87)90022-9

Galton, M., & Simon, B. (Eds.). (1980). Progress and performance in the primary classroom. London: Routledge and Kegan Paul.

Galton, M., Simon, B., & Croll, P. (1980). Inside the primary classroom. London: Routledge & Kegan Paul.

Geijsel, F., Sleegers, P., Stoel, R., & Krüger, M. (2009). The effect of teacher psychological and school organizational and leadership factors on teachers' professional learning in Dutch schools. Elementary School Journal, 109, 406-427. doi:10.1086/593940

General Economics Division (GED). (2009). Millennium development goals needs assessment & costing 2009-2015 Bangladesh. Dhaka, Bangladesh: Planning Commission, Government of the People's Republic of Bangladesh. Retrieved from www.plancomm.gov.bd/mdg-needs-assessment-and-costing-report-2009-2015

General Economics Division (GED). (2015). Millennium development goals: Bangladesh progress report 2015. Dhaka, Bangladesh: Planning Commission, Government of the People's Republic of Bangladesh. Retrieved from http://www.plancomm.gov.bd/wp-content/uploads/2016/03/Final_Post-2015-Development-Agenda_Bangladesh.pdf

Getzels, J., & Jackson, P. (1963). The teacher's personality and characteristics. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 506-582). Chicago: Rand McNally.

Gibson, S., & Dembo, M. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76, 569-582.

Glass, G. (2005). Teacher characteristics. In A. Molnar (Ed.), School reform proposals: The research evidence (Chapter 8). Charlotte, NC: Information Age.

Gliner, J., Morgan, G., & Leech, N. (2009). Research methods in applied settings: An integrated approach to design and analysis. New York: Routledge.

Goddard, R., Hoy, W., & Woolfolk Hoy, A. (2000). Collective teacher efficacy: Its meaning, measure, and effect on student achievement. American Educational Research Journal, 37, 479–507.

Goe, L. (2007). The link between teacher quality and student outcomes: A research synthesis. Washington, DC: National Comprehensive Center for Teacher Quality.

Goe, L. (2008). Key issue: Using value-added models to identify and support highly effective teachers. Washington, DC: National Comprehensive Center for Teacher Quality.

Goe, L., Bell, C., & Little, O. (2008). Approaches to evaluating teacher effectiveness: A research synthesis. Washington, DC: National Comprehensive Center for Teacher Quality.

Goe, L., Holdheide, L., & Miller, T. (2014). Practical guide to designing comprehensive teacher evaluation systems: A tool to assist in the development of teacher evaluation systems (Rev. ed.). Washington, DC: American Institutes for Research, Center on Great Teachers and Leaders. Retrieved from http://www.gtlcenter.org/sites/default/files/docs/practicalGuideEvalSystems.pdf

Goh, S. (1994). Interpersonal teacher behaviour, classroom climate and student outcomes in primary mathematics classes in Singapore (Unpublished doctoral dissertation). Curtin University of Technology, Perth.

Goldhaber, D., & Brewer, D. (1997). Why don't schools and teachers seem to matter? Assessing the impact of unobservables on educational productivity. Journal of Human Resources, 32, 505-523.

Goldstein, H., & Sammons, P. (1997). The influence of secondary and junior schools on sixteen year examination performance: A cross-classified multilevel analysis. School Effectiveness and School Improvement, 8, 219-230.

Good, T. (2010). Forty years of research on teaching 1968-2008: What do we know now that we didn’t know then? In R. Marzano (Ed.), On excellence in teaching (pp. 31-62). Bloomington, IN: Solution Tree.

Good, T., & Brophy, J. (1986). Educational psychology: A realistic approach. New York: Longman.

Good, T., & Grouws, D. (1977). Teaching effects: A process-product study in fourth grade mathematics classrooms. Journal of Teacher Education, 28, 49-54.

Good, T., & Grouws, D. (1979a). Experimental study of mathematics instruction in elementary schools (Final report). Columbia, MO: Centre for the Study of Social Behaviour, University of Missouri.

Good, T., & Grouws, D. (1979b). The Missouri mathematics effectiveness project: An experimental study in fourth grade classrooms. Journal of Educational Psychology, 71, 355-362.

Good, T., Grouws, D., & Beckerman, T. (1978). Curriculum pacing: Some empirical data in mathematics. Journal of Curriculum Studies, 10, 75-83. doi:10.1080/0022027780100106

Good, T., Grouws, D., & Ebmeier, M. (1983). Active mathematics teaching. New York: Longman.

Gorard, S., & Taylor, C. (2004). Combining methods in educational and social research. London: Open University Press.

Graham, G., & Heimerer, E. (1981). Research on teacher effectiveness: A summary with implications for teaching. Quest, 33, 14-25. doi:10.1080/00336297.1981.10483718

Gravetter, F., & Wallnau, L. (2013). Essentials of statistics for the behavioural sciences. Belmont, CA: Wadsworth, Cengage Learning.

Gray, J., Hopkins, D., Reynolds, D., Wilcox, B., Farrell, S., & Jesson, D. (2000). Improving schools' performance and potential. Buckingham, UK: Open University Press.

Grieve, A. (2010). Exploring the characteristics of ‘teachers for excellence’: Teachers’ own perceptions. European Journal of Teacher Education, 33, 265-277. doi:10.1080/02619768.2010.492854

Griffin, G., & Barnes, S. (1986). Using research findings to change school and classroom practice: Results of an experimental study. American Educational Research Journal, 23, 572-586. doi:10.3102/00028312023004572

Groschener, A., Seidel, T., & Shavelson, R. (2013). Methods for studying teacher and teaching effectiveness. In J. Hattie & M. Anderman (Eds.), International guide to student achievement (pp. 240-242). New York: Routledge.

Guarino, C., Hamilton, L., Lockwood, J., & Rathbun, A. (2006). Teacher qualifications, instructional practices, and reading and mathematics gains of kindergartners (NCES 2006-031). Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Guskey, T. (1981). Measurement of responsibility teachers assume for academic successes and failures in the classroom. Journal of Teacher Education, 32, 44-51.

Guskey, T. (1984). The influence of change in instructional effectiveness upon the affective characteristics of teachers. American Educational Research Journal, 21, 245-259.

Guskey, T. (1988). Teacher efficacy, self-concept, and attitudes toward the implementation of instructional innovation. Teaching and Teacher Education, 4, 63-69. doi:10.1016/0742-051X(88)90025-X

Guskey, T. (2013). Defining student achievement. In J. Hattie & M. Anderman (Eds.), International guide to student achievement (pp. 3-6). New York: Routledge.

Guzzetti, B., Snyder, T., & Glass, G. (1992). Promoting conceptual change in science: Can texts be used effectively? Journal of Reading, 35, 642-649.

Hafner, A. (1993). Teaching-method scales and mathematics-class achievement: What works with different outcomes? American Educational Research Journal, 30, 71-94. doi:10.3102/00028312030001071

Hair, J., Black, W., Babin, B., & Anderson, R. (2014). Multivariate data analysis (7th ed.). Harlow, UK: Pearson Education.

Hair, J., Black, W., Babin, B., Anderson, R., & Tatham, R. (2006). Multivariate data analysis. Upper Saddle River, NJ: Pearson Prentice Hall.

Hall, L. (1988). The effects of cooperative learning on achievement: A meta-analysis (Unpublished doctoral dissertation). University of Georgia, Athens, GA.

Hannafin, M., Hill, J., & Land, S. (1997). Student-centered learning and interactive multimedia: Status, issues, and implications. Contemporary Education, 68, 94-97.

Hanushek, E., Kain, J., O’Brien, D., & Rivkin, S. (2005). The market for teacher quality. Discussion paper no. 04-25, Stanford Institute for Economic Policy Research (SIEPR), Stanford University.

Harris, D., & Sass, T. (2007). What makes for a good teacher and who can tell? Washington, DC: Urban Institute.

Hattie, J. (2003). Teachers make a difference: What is the research evidence? Paper presented at the Australian Council for Educational Research Annual Conference on Building Teacher Quality, Melbourne.

Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Oxon, UK: Routledge.

Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. Oxon, UK: Routledge.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81-112. doi: 10.3102/003465430298487

Hattie, J., & Yates, G. (2014). Using feedback to promote learning. In A. Benassi, E. Overson, & M. Hakala (Eds.), Applying science of learning in education: Infusing psychological science into the curriculum (pp. 45-58). Retrieved from http://teachpsych.org/ebooks/asle2014/index.php

Hattie, J., Biggs, J., & Purdie, N. (1996). Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research, 66, 99-136. doi:10.3102/00346543066002099

Haystead, M., & Marzano, R. (2009). Meta-analytic synthesis of studies conducted at Marzano Research Laboratory on instructional strategies. Englewood, CO: Marzano Research Laboratory.

Helmke, A., & Schrader, F. (1988). Successful student practice during seatwork: Efficient management and active supervision not enough. Journal of Educational Research, 82, 70-76.

Herman, J., & Klein, C. (1996). Evaluating equity in alternative assessment: An illustration of opportunity-to-learn issues. Journal of Educational Research, 89, 246-256. doi:10.1080/00220671.1996.9941209

Hershberg, T., Simon, V., & Lea-Kruger, B. (2004). Measuring what matters. American School Board Journal, 191, 27-31.

Hill, H., Rowan, B., & Ball, D. (2005). Effects of teachers' mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42, 371-406.

Hill, P., & Goldstein, H. (1998). Multilevel modelling of educational data with cross classification and missing identification for units. Journal of Educational and Behavioural Statistics, 23, 117-128.

Hmelo-Silver, C. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16, 235-266.

Holmes-Smith, P. (2001). Introduction to structural equation modelling using LISREL. ACSPRI-Winter Training Program, Perth.

Hooper, D., Coughlan, J., & Mullen, M. (2008). Structural equation modelling: Guidelines for determining model fit. Journal of Business Research Methods, 6, 53–60.

Hu, L., & Bentler, P. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modelling: A Multidisciplinary Journal, 6, 1-55. doi:10.1080/10705519909540118

Hunkins, F. (1976). Involving students in questioning. Boston: Allyn and Bacon.

Hunt, C. (2009). Teacher effectiveness: A review of the international literature and its relevance for improving education in Latin America. Working Paper Series no. 43. Washington, DC: Partnership for Educational Revitalization in the Americas (PREAL).

Huq, M., & Rahman, P. (2008). Gender disparities in secondary education in Bangladesh. International Education Studies, 1, 115-128. doi:10.5539/ies.v1n2p115

Hyman, R. (1979). Strategic questioning. Englewood Cliffs, NJ: Prentice Hall.

Irving, S. (2004). The development and validation of a student evaluation instrument to identify highly accomplished mathematics teachers (Doctoral dissertation). The University of Auckland, New Zealand.

Jacob, B., & Lefgren, L. (2008). Can principals identify effective teachers? Evidence on subjective performance evaluation in education. Journal of Labor Economics, 26, 101-136. doi:10.1086/522974

Johnson, D., & Johnson, R. (1989). Cooperation and competition: Theory and research. Edina, MN: Interaction Book Company.

Johnson, D., & Johnson, R. (1999). Making cooperative learning work. Theory into Practice, 38, 67-73. doi:10.1080/00405849909543834

Johnson, D., Maruyama, G., Johnson, R., Nelson, D., & Skon, L. (1981). Effects of cooperative, competitive, and individualistic goal structures on achievement: A meta-analysis. Psychological Bulletin, 89, 47-62.

Johnson, R., & Onwuegbuzie, A. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33, 14-26. doi: 10.3102/0013189X033007014

Jöreskog, K., & Sörbom, D. (1993). LISREL 8: Structural equation modelling with the SIMPLIS command language. Chicago, IL: Scientific Software International.

Kane, T., Rockoff, J., & Staiger, D. (2008). What does certification tell us about teacher effectiveness? Evidence from New York City. Economics of Education Review, 27, 615-631.

Kane, T., Taylor, E., Tyler, J., & Wooten, A. (2011). Identifying effective classroom practices using student achievement data. Journal of Human Resources, 46, 587-613. doi:10.3368/jhr.46.3.587

Kauchak, D., & Eggen, P. (1998). Teaching and learning: Research-based methods. Boston: Allyn and Bacon.

Kerlinger, F. (1986). Foundations of behavioural research. Orlando, FL: Holt, Rinehart and Winston.

Khanum, F. (2014). Learners' learning style preferences and teachers' awareness in the context of higher secondary level in Bangladesh. Global Journal of Human-Social Science Research, 14, 1-7.

Killen, R. (2013). Effective teaching strategies. South Melbourne, Victoria: Cengage Learning Australia.

Klassen, A., Creswell, J., Clark, V., Smith, K., & Meissner, H. (2012). Best practices in mixed methods for quality of life research. Quality of Life Research, 21, 377-380. doi:10.1007/s11136-012-0122-x

Klassen, R., Bong, M., Usher, E., Chong, W., Huan, V., Wong, I., & Georgiou, T. (2009). Exploring the validity of a teachers’ self-efficacy scale in five countries. Contemporary Educational Psychology, 34, 67-76. doi:10.1016/j.cedpsych.2008.08.001

Kline, R. (2005). Principles and practice of structural equation modelling. New York: Guilford Press.

Knobloch, N., & Whittington, M. (2002). Novice teachers' perceptions of support, teacher preparation quality, and student teaching experience related to teacher efficacy. Journal of Vocational Education Research, 27, 331-341. doi:10.5328/JVER27.3.331

Kolawole, E. (2008). Effects of competitive and cooperative learning strategies on academic performance of Nigerian students in mathematics. Educational Research and Reviews, 3, 33-37.

Korthagen, F. & Kessels, J. (1999). Linking theory and practice: Changing the pedagogy of teacher education. Educational Researcher, 28, 4-17. doi: 10.3102/0013189X028004004

Kounin, J. (1970). Discipline and group management in classrooms. Huntington, NY: R.E. Krieger.

Krathwohl, D., Bloom, B., & Masia, B. (1964). Taxonomy of educational objectives: Handbook II: Affective domain. New York: McKay.

Kumar, R. (2011). Research methodology: A step-by-step guide for beginners (3rd ed.). Los Angeles: Sage.

Kumar, R. (2014). Research methodology: A step-by-step guide for beginners (4th ed.). Los Angeles: Sage.

Kupermintz, H. (2003). Teacher effects and teacher effectiveness: A validity investigation of the Tennessee value added assessment system. Educational Evaluation and Policy Analysis, 25, 287-298.

Kvale, S. (1996). Interviews: An introduction to qualitative research interviewing. Thousand Oaks, CA: Sage.

Kvale, S. (2007). Doing interviews. Thousand Oaks, CA: Sage.

Kyriacou, C. (2009). Effective teaching in schools, theory and practice (3rd ed.). Cheltenham, UK: Nelson Thornes.

Kyriakides, L. (2005). Extending the comprehensive model of educational effectiveness by an empirical investigation. School Effectiveness and School Improvement, 16, 103-152. doi:10.1080/09243450500113936

Kyriakides, L., & Creemers, B. (2008). Using a multidimensional approach to measure the impact of classroom level factors upon student achievement: A study testing the validity of the dynamic model. School Effectiveness and School Improvement, 19, 183–205. doi:10.1080/09243450802047873

Kyriakides, L., & Creemers, B. (2009). The effects of teacher factors on different outcomes: Two studies testing the validity of the dynamic model. Effective Education, 1, 61–85. doi: 10.1080/19415530903043680

Kyriakides, L., Archambault, I., & Janosz, M. (2013). Searching for stages of effective teaching: A study testing the validity of the dynamic model in Canada. The Journal of Classroom Interaction, 48, 11-24.

Kyriakides, L., Campbell, R., & Christofidou, E. (2002). Generating criteria for measuring teacher effectiveness through a self-evaluation approach: A complementary way of measuring teacher effectiveness. School Effectiveness and School Improvement, 13, 291-325. doi:10.1076/sesi.13.3.291.3426

Kyriakides, L., Christoforou, C., & Charalambous, C. (2013a). What matters for student learning outcomes: A meta-analysis of studies exploring factors of effective teaching. Teaching and Teacher Education, 36, 143-152. doi:10.1016/j.tate.2013.07.010

Kyriakides, L., Creemers, B., & Antoniou, P. (2009). Teacher behaviour and student outcomes: Suggestions for research on teacher training and professional development. Teaching and Teacher Education, 25, 12–23. doi:10.1016/j.tate.2008.06.001

Kyriakides, L., Creemers, B., Antoniou, P., & Demetriou, D. (2010). A synthesis of studies searching for school factors: Implications for theory and research. British Educational Research Journal, 36, 807-830. doi:10.1080/01411920903165603

Lampert, M. (1988). What can research on teacher education tell us about improving quality in mathematics education? Teaching and Teacher Education, 4, 157-170.

Leary, T. (1957). Interpersonal diagnosis of personality: A functional theory and methodology for personality evaluation. Oxford, UK: Ronald Press

Leech, N., & Onwuegbuzie, A. (2009). A typology of mixed methods research designs. Quality & Quantity, 43, 265-275. doi:10.1007/s11135-007-9105-3

Leedy, P., & Ormrod, J. (2005). Practical research planning and design. New Jersey, USA: Pearson.

Leigh, A. (2010). Estimating teacher effectiveness from two-year changes in students’ test scores. Economics of Education Review. 29, 480–488. doi:10.1016/j.econedurev.2009.10.010

Levin, T., & Long, R. (1981). Effective instruction. Washington, DC: Association for Supervision and Curriculum Development.

Levy, J., den Brok, P., Wubbels, T., & Brekelmans, M. (2003). Significant variables in students’ perceptions of teacher interpersonal communication styles. Learning Environments Research, 6, 5–36.

Levy, J., Wubbels, T., & Brekelmans, M. (1992). Student and teacher characteristics and perceptions of teacher communication style. Journal of Classroom Interaction, 27, 23–29.

Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Linn, R. (2001). The design and evaluation of educational assessment and accountability systems. CSE Technical Report 539. Los Angeles: University of California.

Little, O., Goe, L., & Bell, C. (2009). A practical guide to evaluating teacher effectiveness. Washington, DC: National Comprehensive Centre for Teacher Quality.

Locke, E. & Latham, G. (1990). A theory of goal setting and performance. Englewood Cliffs, NJ: Prentice Hall.

Long, M., & Sato, C. (1983). Classroom foreigner talk discourse: Forms and functions of teachers' questions. In H. Seliger & M. Long (Eds.), Classroom oriented research in second language acquisition (pp. 268-285). Rowley, MA: Newbury House.

Lutfuzzaman, A., Muhammad, N., & Hasan, A. (2006). Developing a quality mathematics education culture in Bangladesh. Bangladesh Education Journal, 5, 25-34.

Mason, L. (2007). Introduction: Bridging the cognitive and sociocultural approaches in research on conceptual change: Is it feasible? Educational Psychologist, 42, 1-7.

McCaffrey, D., Lockwood, J., Koretz, D., & Hamilton, L. (2003). Evaluating value-added models for teacher accountability. Santa Monica, CA: RAND Corporation. Retrieved from http://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf

McCaffrey, D. F., Lockwood, J. R., Koretz, D., Louis, T.A., & Hamilton, L. (2004). Models for value-added modeling of teacher effects. Journal of Educational and Behavioral Statistics, 29, 67–101.

McCaffrey, D., & Anthony, E. (2004). Can teacher quality be effectively assessed? National Board certification as a signal of effective teaching. Seattle, WA: Center on Reinventing Public Education.

McCaffrey, D., Koretz, D., Lockwood, J., & Hamilton, L. (2004). The promise and peril of using value-added modeling to measure teacher effectiveness. Santa Monica, CA: the RAND Corporation. Retrieved from http://www.rand.org/pubs/research_briefs/2005/RAND_RB9050.pdf

McDonald, R., & Ho, M. (2002). Principles and practice in reporting structural equation analyses. Psychological Methods, 7, 64-82. doi:10.1037/1082-989X.7.1.64

McIntosh, C. (2006). Rethinking fit assessment in structural equation modelling: A commentary and elaboration on Barrett. Personality and Individual Differences, 42, 859-867. doi:10.1016/j.paid.2006.09.020

Measures of Effective Teaching (MET) project. (2010). Student perceptions and the MET project. Bill & Melinda Gates Foundation. Retrieved from http://www.METproject.org

Measures of Effective Teaching (MET) project. (2013). Measures of effective teaching project releases final research report. Bill & Melinda Gates Foundation. Retrieved from http://www.gatesfoundation.org

Medley, D., & Coker, H. (1987). The accuracy of principals' judgments of teacher performance. Journal of Educational Research, 80, 242-247. doi:10.1080/00220671.1987.10885759

Medwell, J., Wray, D., Poulson, L., & Fox, R. (1998). Effective teachers of literacy: A report of a research project commissioned by the Teacher Training Agency. Exeter, UK: University of Exeter.

Mertler, C. (2012). Action research: Improving schools and empowering educators. Thousand Oaks, CA: Sage.

Miles, M., & Huberman, A. (1994). An expanded sourcebook: Qualitative data analysis. Thousand Oaks, CA: Sage.

Miles, M., Huberman, A., & Saldana, J. (2014). Qualitative data analysis: A methods sourcebook. Los Angeles, CA: Sage.

Ministry of Education (MoE) (2004). Development of Education, National Report of Bangladesh. Retrieved from http://www.ibe.unesco.org/International/ICE47/English/Natreps/reports/banglades h.pdf

Ministry of Education (MoE) (2016). National education policy 2010, Bangladesh. Retrieved from www.moe.gov.bd

Monk, D. (1994). Subject matter preparation of secondary mathematics and science teachers and student achievement. Economics of Education Review, 13, 125–145. doi:10.1016/0272-7757(94)90003-5

Morgan, G., Leech, N., Gloeckner, G., & Barrett, K. (2012). IBM SPSS for introductory statistics: Use and interpretation. New York: Routledge.

Morrison, K. (1993). Planning and accomplishing school-centred evaluation. Dereham, UK: Peter Francis.

Mortimore, P., Sammons, P., Stoll, L., Lewis, D., & Ecob, R. (1988). School matters: The junior years. Somerset, UK: Open Books.

Moser, C., & Kalton, G. (1989). Survey methods in social investigation. Aldershot, UK: Gower.

Muijs, D., & Reynolds, D. (2000). School effectiveness and teacher effectiveness in mathematics: Some preliminary findings from the evaluation of the mathematics enhancement programme (primary). School Effectiveness and School Improvement, 11, 273-303. doi:10.1076/0924-3453(200009)11:3;1-G;FT273

Muijs, D., & Reynolds, D. (2001). Being or doing: The role of teacher behaviours and beliefs in school and teacher effectiveness in mathematics, a SEM analysis. Paper presented at the annual meeting of the American Educational Research Association, Seattle, WA.

Muijs, D., & Reynolds, D. (2001a). Effective teaching: Evidence and practice. London: Sage.

Muijs, D., & Reynolds, D. (2005). Effective teaching: evidence and practice. London: Sage.

Muijs, D., Kyriakides, L., van der Werf, G., Creemers, B., Timperley, H., & Earl, L. (2014). State of the art - teacher effectiveness and professional learning. School Effectiveness and School Improvement, 25, 231-256. doi:10.1080/09243453.2014.885451

Murphy, D. (2012). Where is the value in value-added modeling? Pearson Education, Inc. Retrieved from http://educatoreffectiveness.pearsonassessments.com/downloads/viva_v1.pdf

Murray, S., & Udermann, B. (2003). Massed versus distributed practice: Which is better? CAHPERD Journal, 1, 19-22.

Nath, S. (2006). An exploration of the students' assessment at the beginning of secondary education. Bangladesh Education Journal, 5, 9-24.

Nath, S., & Chowdhury, A. (2008). Education watch 2008, Progress made, challenges remained: State of primary education in Bangladesh. Dhaka, Bangladesh: Campaign for Popular Education (CAMPE).

Nath, S., Chowdhury, A., Ahmed, M., & Choudhury, R. (2014). Education watch 2014, Whither grade vs examination? An assessment of primary education completion examination in Bangladesh. Dhaka, Bangladesh: Campaign for Popular Education (CAMPE).

Nath, S., Haq, N., Begum,S., Ullah, A., Sattar, A., & Chowdhury, A. (2007). Education watch 2007: The state of secondary education quality and equity challenges. Dhaka, Bangladesh: Campaign for Popular Education (CAMPE).

Nuthall, G., & Church, R. (1973). Experimental studies of teaching behaviour. In G. Chanan (Ed.), Towards a science of teaching (pp. 9–25). Windsor Berkshire, UK: National Foundation for Educational Research.

Ojo, A., & Egbon, F. (2005). Effects of classroom goal structures on students’ academic achievement in mathematics among senior secondary school. Ikere Journal of Science Teacher, 2, 19–30.

Oppenheim, A. (1992). Questionnaire design, interviewing and attitude measurement. London: Pinter. doi: 10.1002/casp.2450040506

Ozerk, K. (2001). Teacher‐student verbal interaction and questioning, class size and bilingual students' academic performance, Scandinavian Journal of Educational Research, 45, 353-367. doi: 10.1080/00313830120096761

Palardy, G., & Rumberger, R. (2008). Teacher effectiveness in first grade: The importance of background qualifications, attitudes, and instructional practices for student learning. Educational Evaluation and Policy Analysis, 30, 111-140. doi:10.3102/0162373708317680

Palincsar, A., & Brown, A. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117-175. doi:10.1207/s1532690xci0102_1

Panayiotou, A., Kyriakides, L., Creemers, B., McMahon, L., Vanlaar, G., Pfeifer, M., Rekalidou, G., & Bren, M. (2014). Teacher behaviour and student outcomes: Results of a European study. Educational Assessment, Evaluation and Accountability, 26, 73-93. doi:10.1007/s11092-013-9182-x

Paris, S., & Paris, A. (2001). Classroom applications of research on self-regulated learning. Educational Psychologist, 36, 89-101.

Patton, M. (1980). Qualitative Evaluation Methods. Beverly Hills, CA: Sage.

Peterson, K. (2004). Research on school teacher evaluation. National Association of Secondary School Principals Bulletin, 88, 60-79. doi: 10.1177/019263650408863906

Phellas, C., Bloch, A., & Seale, C. (2011). Structured methods: Interviews, questionnaires and observation. In C. Seale (Ed.), Researching society and culture (pp. 182-202). London: Sage.

Phillips, D. (1997). How, why, what, when and where: Perspectives on constructivism in psychology and education. Issues in Education, 3, 151-194.

Piaget, J. (2001). The child's conception of physical causality. New Brunswick, NJ: Transaction Publishers.

Porter, A., Kirst, M., Osthoff, E., Smithson, J., & Schneider, S. (1993). Reform up close: An analysis of high school mathematics and science classrooms. Final report. School of Education Research, University of Wisconsin-Madison.

Poulou, M. (2007). Personal teaching efficacy and its sources: Student teachers’ perceptions. Educational Psychology, 27, 191-218. doi:10.1080/01443410601066693

Pressley, M. (1999). Self-regulated comprehension processing and its development through instruction. In L. Gambrell, L. Morrow, S. Neuman, & M. Pressley (Eds.), Best practices in literacy instruction (pp. 90-97). New York: Guilford Press.

Qin, Z., Johnson, D., & Johnson, R. (1995). Cooperative versus competitive efforts and problem solving. Review of Educational Research, 65, 129-143. doi: 10.3102/00346543065002129

Rahman, M., Hamzah, M., Meerah, T. & Rahman, M. (2010). Historical development of secondary education in Bangladesh: Colonial period to 21st century. International Education Studies, 3, 114-125.

Ramaprasad, A. (1983). On the definition of feedback. Behavioural Science, 28, 4–13. doi:10.1002/bs.3830280103

Ramirez, J., & Merino, B. (1990). Classroom talk in English immersion, early-exit and late-exit transitional programs. In R. Jacobson & C. Faltis (Eds.), Language distribution issues in bilingual schooling (pp. 61-103). Clevedon, England: Multilingual Matters. Retrieved from http://www.jstor.org/stable/27540295

Raykov, T. (1998). Coefficient alpha and composite reliability with interrelated nonhomogeneous items. Applied Psychological Measurement, 22, 375-385.

Reynolds, D. (1995). The effective school: An inaugural lecture. Evaluation & Research in Education, 9, 57-73. doi:10.1080/09500799509533374

Reynolds, D., & Muijs, D. (1999). The effective teaching of mathematics: A review of research. School Leadership & Management, 19, 273-288. doi:10.1080/13632439969032

Rice, J. (1999). The impact of class size on instructional strategies and the use of time in high school mathematics and science courses. Educational Evaluation and Policy Analysis, 21, 215-229.

Riggs, I., & Enochs, L. (1990). Toward the development of an elementary teacher's science teaching efficacy belief instrument. Science Education, 74, 625-637. doi:10.1002/sce.3730740605

Rivkin, S., Hanushek, E., & Kain, J. (2005). Teachers, schools, and academic achievement. Econometrica, 73, 417-458.

Robson, C. (2002). Real world research: A resource for social scientists and practitioner-researchers. Oxford, UK: Blackwell.

Rogers, C. (1982). A social psychology of schooling: The expectancy process. London: Routledge & Kegan Paul.

Rosenshine, B. (1976). Classroom instruction. In L. Gage (Ed.), The psychology of teaching methods (pp. 335-371). Chicago, IL: University of Chicago Press.

Rosenshine, B. (1979). Content, time and direct instruction. In P. Peterson & H. Walberg (Eds.), Research on teaching: Concepts, findings and implications (pp. 28-56). Berkeley, CA: McCutchan.

Rosenshine, B. (1986). Synthesis of research on explicit teaching. Educational Leadership, 43, 60-69.

Rosenshine, B. (1987). Explicit teaching and teacher training. Journal of Teacher Education, 38, 34-36. doi: 10.1177/002248718703800308

Rosenshine, B. (1995). Advances in research on instruction. The Journal of Educational Research, 88, 262-268.

Rosenshine, B., & Stevens, R. (1986). Teaching functions. In M. Wittrock (Ed.), Handbook of research on teaching (pp. 376-391). New York: Macmillan.

Ross, J. (1994). The impact of an in-service to promote cooperative learning on the stability of teacher efficacy. Teaching and Teacher Education, 10, 381-394.

Rothkopf, E., & Billington, M. (1979). Goal-guided learning from text: inferring a descriptive processing model from inspection times and eye movements. Journal of Educational Psychology, 71, 310-327.

Rothstein, J. (2009). Student Sorting and Bias in Value Added Estimation: Selection on Observables and Unobservables. Education Finance and Policy, 4, 537–571.

Rotter, J. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs, 80, 1-28.

Rowe, K., & Hill, P. (1994). Multilevel modelling in school effectiveness research: How many levels? Paper presented at the International Congress for School Effectiveness & Improvement, Melbourne, Australia, 3-6 January, 1994.

Sadler, D. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144. doi:10.1007/BF00117714

Sadler, D. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5, 77-84. doi:10.1080/0969595980050104

Saklofske, D., Michayluk, J., & Randhawa, B. (1988). Teachers' efficacy and teaching behaviors. Psychological Reports, 63, 407-414. doi:10.2466/pr0.1988.63.2.407

Sammons, P., DeLaMatre, J., & Mujtaba, T. (2002). A summary review of research on teacher effectiveness. Draft 2, 1-33.

Savage, T. (1972). A study of the relationship of classroom questions and social studies achievement of fifth-grade children (Doctoral dissertation). University of Washington.

Scheerens, J. (1992). Effective schooling: Research, theory and practice. London: Cassell.

Scheerens, J. (2014). School, teaching, and system effectiveness: Some comments on three state-of-the-art reviews. School Effectiveness and School Improvement, 25, 282-290. doi:10.1080/09243453.2014.885453

Scheerens, J., & Bosker, R. (1997). The foundations of educational effectiveness. Oxford, UK: Pergamon.

Scheerens, J., & Creemers, B. (1989). Conceptualizing school effectiveness. International Journal of Educational Research, 13, 691-707.

Schmidt, R. (1991). Motor learning and performance: from principles to practice. Champaign, IL: Human Kinetics Books.

Schoenfeld, A. (1998). Toward a theory of teaching-in-context. Issues in Education, 4, 1–94.

Schon, D. (1987). Educating the reflective practitioner: Toward a design for teaching and learning in the professions. San Francisco, CA: Jossey-Bass.

Schumacker, R., & Lomax, R. (2004). A beginner's guide to structural equation modeling. Mahwah, New Jersey: Lawrence Erlbaum Associates.

Schunk, D. (1991). Self-efficacy and academic motivation. Educational Psychologist, 26, 207-231.

Scriven, M. (1981). Summative teacher evaluation. In J. Millman (Ed.), Handbook of teacher evaluation (pp. 244-271). Beverly Hills, CA: Sage.

Secada, W. (1992). Race, ethnicity, social class, language and achievement in mathematics. In D. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 623-660). New York: Macmillan.

Secondary Education Quality and Access Enhancement Project (SEQAEP). (2011). Operational manual for mathematics and English language additional class scheme. Dhaka, Bangladesh: Directorate of Secondary and Higher Education. Retrieved from http://www.seqaep.gov.bd/files/AC.pdf

Seidel, T., & Shavelson, R. (2007). Teaching effectiveness research in the past decade: The role of theory and research design in disentangling meta-analysis results. Review of Educational Research, 77, 454-499. doi:10.3102/0034654307310317

Seldin, P. (1999). Current practices—Good and bad—nationally. In P. Seldin (Ed.), Current practices in evaluating teaching: A practical guide to improved faculty performance and promotion/tenure decisions (pp. 1-24). Bolton, MA: Anker.

Shekh, M. (2005). The state of primary educational quality in Bangladesh: An evaluation. Paper presented in the Third NETREED Conference, Beitostolen, Norway.

Silverman, D. (1985). Qualitative methodology and sociology: Describing the social world. Brookfield, VT: Gower.

Silverman, S. (1990). Linear and curvilinear relationships between student practice and achievement in physical education. Teaching and Teacher Education, 6, 305-314. doi:10.1016/0742-051X(90)90023-X

Silverman, S. (1993). Student characteristics, practice, and achievement in physical education. The Journal of Educational Research, 87, 54-61. doi:10.1080/00220671.1993.9941166

Simon, D. (2013). Spaced and Massed Practice. In J. Hattie & M. Anderman (Eds.), International guide to student achievement (pp. 411-413). New York: Routledge.

Simpson, E. (1966). The classification of educational objectives: psychomotor domain. Urbana, IL: University of Illinois.

Sinclair, B., & Fraser, B. (2002). Changing classroom environments in urban middle schools. Learning Environments Research, 5, 301-328. doi:10.1023/A:1021976307020

Skaalvik, M., & Skaalvik, S. (2007). Dimensions of teacher self-efficacy and relations with strain factors, perceived collective teacher efficacy, and teacher burnout. Journal of Educational Psychology, 99, 611-625. doi:10.1037/0022-0663.99.3.611

Slater, R., & Teddlie, C. (1992). Toward a theory of school effectiveness and leadership. School Effectiveness and School Improvement, 3, 247–257. doi:10.1080/0924345920030402

Slavin, R., & Cooper, R. (1999). Improving intergroup relations: Lessons learned from cooperative learning programs. Journal of Social Issues, 55, 647-663.

Slavin, R. (1983). Team-assisted individualization: A cooperative learning solution for adaptive instruction in mathematics (Report No. 340). Baltimore, MD: Centre for the Social Organization of Schools, Johns Hopkins University.

Smith, B. (2000). Quantity matters: Annual instructional time in an urban school system. Educational Administration Quarterly, 36, 652-682. doi:10.1177/00131610021969155

Smith, L. (1985). The effect of lesson structure and cognitive level of questions on student achievement. Journal of Experimental Education, 54, 44-49. doi:10.1080/00220973.1985.10806397

Smith, L., & Land, M. (1981). Low inference verbal behaviours related to teacher clarity. Journal of Classroom Interaction, 17, 37-42.

Smith, L., & Sanders, K. (1981). The effects on student achievement and student perception of varying structure in social studies content. Journal of Educational Research, 74, 333-336. doi:10.1080/00220671.1981.10885325

Smith, T., Baker, W., Hattie, J., & Bond, L. (2008). A validity study of the certification system of the national board for professional teaching standards. In L. Ingvarson & J. Hattie (Eds.), Assessing teachers for professional certification: The first decade of the National Board for professional teaching standards (pp. 345-378). Oxford, UK: Elsevier.

Soar, R., & Soar, R. (1979). Emotional climate and management. In P. Peterson & H. Walberg (Eds.), Research on teaching: Concepts, findings, and implications (pp. 97-119). Berkeley, CA: McCutchan.

Soodak, L., & Podell, D. (1996). Teacher efficacy: Toward the understanding of a multi-faceted construct. Teaching and Teacher Education, 12, 401-411. doi:10.1016/0742-051X(95)00047-N

Spatz, C. (2011). Basic statistics: Tales of distribution. Belmont, CA: Cengage Learning.

Stallings, J., & Kaskowitz, D. (1974). Follow-through classroom observation evaluation. Menlo Park, CA: Stanford Research Institute.

Stallings, J. (1985). Effective elementary classroom practices. In J. Kyle (Ed.), Reaching for excellence: An effective sourcebook (pp. 14-42). Washington, DC: Government Printing Office.

Stallings, J., Almy, M., Resnick, L., & Leinhardt, G. (1975). Implementation and child effects of teaching practices in follow through classrooms. Monographs of the Society for Research in Child Development, 40, 1-133. doi: 10.2307/1165828

Stein, M., & Wang, M. (1988). Teacher development and school improvement: The process of teacher change. Teaching and Teacher Education, 4, 171-187. doi:10.1016/0742-051x(88)90016-9

Stevenson, H., Chen, C., & Lee, S. (1993). Mathematics achievement of Chinese, Japanese and American children: Ten years later. Science, 259, 53–58.

Stigler, J., & Hiebert, J. (1999). The teaching gap: Best ideas from the world’s teachers for improving education in the classroom. New York: Free Press.

Stodolsky, S. (1984). Teacher evaluation: The limits of looking. Educational Researcher, 13, 11-18.

Strivens, J. (1985). School climate: A review of a problematic concept. In D. Reynolds (Ed.), Studying school effectiveness (pp. 45-57). London: Falmer Press.

Stronge, J. (2002). Qualities of effective teachers. Alexandria, VA: Association for Supervision and Curriculum Development.

Stronge, J. (2007). Qualities of effective teachers (2nd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Stronge, J., Ward, T., & Grant, L. (2011). What makes good teachers good? A cross-case analysis of the connection between teacher effectiveness and student achievement. Journal of Teacher Education, 62, 339-355. doi:10.1177/0022487111404241

Sweller, J., Kirschner, P., & Clark, R. (2007). Why minimally guided teaching techniques do not work: A reply to commentaries. Educational Psychologist, 42, 115-121.

Teddlie, C. (1994). The integration of classroom and school process data in school effectiveness research. In D. Reynolds, B. Creemers, P. Nesselrodt, E. Schaffer, S. Stringfield, & C. Teddlie (Eds.), Advances in school effectiveness research and practice (pp. 111-132). Oxford, UK: Elsevier Science Ltd.

Teddlie, C., & Tashakkori, A. (2003). Major issues and controversies in the use of mixed methods in the social and behavioural sciences. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social & behavioural research (pp. 3-50). Thousand Oaks, CA: Sage.

Thomas, S., & Smees, R. (2000). Dimensions of secondary school effectiveness: Comparative analyses across regions. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.

Thomson, P. (1991). Competency-based training: Some development and assessment issues for policy makers. Leabrook, Australia: TAFE National Centre for Research and Development, ERIC: ED 333231.

Timperley, H. (2013). Feedback. In J. Hattie & M. Anderman (Eds.), International guide to student achievement (pp. 402-404). New York: Routledge.

Tittle, C. (1994). Toward an educational psychology of assessment for teaching and learning: Theories, contexts, and validation arguments. Educational Psychologist, 29, 149-162. doi:10.1207/s15326985ep2903_4

Topping, K., Samuels, J., & Paul, T. (2007). Does practice make perfect? Independent reading quantity, quality and student achievement. Learning and Instruction, 17, 253-264. doi: 10.1016/j.learninstruc.2007.02.002

Tracz, S. & Gibson, S. (1986). Effects of efficacy on academic achievement. Paper presented at the Annual Meeting of the California Educational Research Association, Marina del Rey, CA.

Travers, R. (1982). Essentials of learning: The new cognitive learning for students of education. New York: Macmillan.

Tschannen-Moran, M., & Hoy, A. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783-805. doi:10.1016/S0742-051X(01)00036-1

Tschannen-Moran, M., & Hoy, W. (2007). The differential antecedents of self-efficacy beliefs of novice and experienced teachers. Teaching and Teacher Education, 23, 944-956. doi: 10.1016/j.tate.2006.05.003

Tschannen-Moran, M., Hoy, A., & Hoy, W. (1998). Teacher efficacy: Its meaning and measure. Review of Educational Research, 68, 202-248. doi: 10.3102/00346543068002202

Tuckman, B. (1972). Conducting educational research. New York: Harcourt Brace and Jovanovich.

Tuckman, B., & Sexton, T. (1990). The relationship between self-beliefs and self-regulated performance. Journal of Social Behaviour and Personality, 5, 465-472.

Tyler, J. (1972). A study of the relationship of two methods of question presentation, sex, and school location to the social studies achievement of second-grade children. Paper presented at the annual convention of the American Educational Research Association (AERA), Chicago, Illinois.

Uddin, M. (2007). Effectiveness of PTIs’ training program for primary mathematics in Bangladesh (Unpublished doctoral dissertation). Hiroshima University, Japan.

United Nations Children's Fund (UNICEF). (2009). Quality Primary Education in Bangladesh. Retrieved from http://www.unicef.org/bangladesh/Quality_Primary_Education(1).pdf

United Nations Educational, Scientific and Cultural Organization (UNESCO). (2007). Secondary education regional information base: country profile-Bangladesh. Bangkok: UNESCO. Retrieved from http://www.uis.unesco.org/Library/Documents/Bangladesh.pdf

United Nations Educational, Scientific and Cultural Organization (UNESCO). (2012). UNESCO Country programming Document for Bangladesh 2012-2016. Dhaka, Bangladesh: UNESCO country office.

Usunier, J. (1998). International and cross-cultural management research. London: Sage.

van Lier, L. (1988). The classroom and the language learner. London: Longman.

Vanlaar, G., Denies, K., Vandecandelaere, M., Van Damme, J., Verhaeghe, J. P., Pinxten, M., & De Fraine, B. (2014). How to improve reading comprehension in high-risk students: effects of class practices in Grade 5. School Effectiveness and School Improvement, 25, 408-432. doi:10.1080/09243453.2013.811088

Vanlaar, G., Kyriakides, L., Panayiotou, A., Vandecandelaere, M., McMahon, L., De Fraine, B., & Van Damme, J. (2016). Do the teacher and school factors of the dynamic model affect high- and low-achieving student groups to the same extent? A cross-country study. Research Papers in Education, 31, 183-211. doi:10.1080/02671522.2015.1027724

Venkatraman, N. (1989). The concept of fit in strategy research: Toward verbal and statistical correspondence. Academy of Management Review, 14, 423-444. doi:10.5465/AMR.1989.4279078

Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Walberg, H. (1982). What makes schooling effective? A synthesis and a critique of three national studies. Contemporary Education, 1, 23-24.

Walberg, H. (1986). Synthesis of research on teaching. In M. Wittrock (Ed.), Handbook of Research on Teaching (pp. 209-225). Dordrecht, the Netherlands: Kluwer.

Walberg, H. (1991). Educational productivity and talent development. In B. Fraser & H. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences (pp. 93-109). Oxford, UK: Pergamon Press.

Walberg, H. (Ed.). (1979). Educational environments and effects: Evaluation, policy, and productivity. Berkeley, CA: McCutchan Publishing Company.

Walsh, J., & Sattes, B. (2011). Thinking through quality questioning: Deepening student engagement. Thousand Oaks, CA: Corwin Press.

Wayne, A., & Youngs, P. (2003). Teacher characteristics and student achievement gains: A review. Review of Educational Research, 73, 89-122. doi:10.3102/00346543073001089

Webster, W., & Mendro, R. (1997). The Dallas value-added accountability system. In J. Millman (Ed.), Grading teachers, grading schools: Is student achievement a valid evaluation measure? (pp. 81-99). Thousand Oaks, CA: Corwin Press.

Weil, M., & Murphy, J. (1982). Instructional processes. In H. Mitzel (Ed.), The encyclopaedia of educational research (pp. 890-917). New York: Macmillan.

Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. Brooklyn, NY: The New Teacher Project. Retrieved from http://widgeteffect.org/downloads/TheWidgetEffect.pdf

Wentzel, K. (2009). Students’ relationships with teachers as motivation contexts. In K. Wentzel & A. Wigfield (Eds.), Handbook of motivation in school (pp. 301–322). Mahwah, NJ: Erlbaum.

Whitty, G., & Willmott, E. (1991). Competence-based teacher education: approaches.

Wilen, W. (1987). Effective questions and questioning: A classroom application. In W. Wilen (Ed.), Questions, questioning techniques, and effective teaching (pp. 107-134). Washington, DC: National Education Association.

Wilen, W. (1991). Questioning skills for teachers. What research says to the teacher. Washington, DC: National Education Association.

Wilen, W., & Clegg Jr, A. (1986). Effective questions and questioning: A research review. Theory & Research in Social Education, 14, 153-161. doi:10.1080/00933104.1986.10505518.

Wilkinson, S. (1981). The relationship of teacher praise and student achievement: A meta-analysis of selected research. Dissertation Abstracts International, 41(9–A), 3998.

Winne, P. (1979). Experiments relating teachers' use of higher cognitive questions to student achievement. Review of Educational Research, 49, 13-49.

Wittrock, M. (1981). Reading comprehension. In F. Pirozzolo & M. Wittrock (Eds.), Neuropsychological and cognitive processes in reading (pp. 229-259). New York: Academic Press.

Wolters, C., & Daugherty, S. (2007). Goal structures and teachers' sense of efficacy: Their relation and association to teaching experience and academic level. Journal of Educational Psychology, 99, 181-193. doi:10.1037/0022-0663.99.1.181

Woolfolk, A., & Hoy, W. (1990). Prospective teachers' sense of efficacy and beliefs about control. Journal of Educational Psychology, 82, 81-91. doi:10.1037/0022-0663.82.1.81

Woolfolk, A., Winne, P., & Perry, N. (2009). Social cognitive and constructivist views of learning. Educational Psychology, 329-370.

Worrell, F., & Kuterbach, L. (2001). The use of students’ ratings of teacher behaviour with academically talented high school students. Journal of Secondary Gifted Education, 14, 236-247. doi:10.4219/jsge-2001-670

Wragg, E. (1993). Primary teaching skills. London: Routledge.

Wray, D., Medwell, J., Fox, R., & Poulson, L. (2000). The teaching practices of effective teachers of literacy. Educational Review, 52, 75-84. doi:10.1080/00131910097432

Wright, C., & Nuthall, G. (1970). Relationships between teacher behaviours and pupil achievement in three experimental elementary science lessons. American Educational Research Journal, 7, 477-491.

Wubbels, T., & Brekelmans, M. (1997). A comparison of student perceptions of Dutch physics teachers' interpersonal behaviour and their educational opinions in 1984 and 1993. Journal of Research in Science Teaching, 34, 447-466.

Wubbels, T., Brekelmans, M., & Hooymayers, H. (1991). Interpersonal teacher behaviour in the classroom. In B. Fraser & H. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences (pp.141-160). Oxford, UK: Pergamon Press.

Wubbels, T., Creton, H., & Hooymayers, H. (1985). Discipline problems of beginning teachers. Paper presented at the annual meeting of the American Educational Research Association, Chicago, Illinois.

Wubbels, T., Levy, J., & Brekelmans, M. (1997). Paying attention to relationships. Educational Leadership, 54, 82-86.

Yates, G., & Yates, S. (1990). Teacher effectiveness research: Toward describing user friendly classroom instruction. Educational Psychology, 10, 225-238. doi:10.1080/0144341900100304

Zahn, G., Kagan, S., & Widaman, K. (1987). Cooperative learning and classroom climate. Journal of School Psychology, 24, 351-362. doi:10.1016/0022-4405(86)90023-3

Zimmerman, B. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41, 64-70. doi:10.1207/s15430421tip4102_2

Appendix A: Permissions

Appendix A1: Request and permission to use the instruments from the authors.

Subject: RE: Request for the observation instruments

From: Leonidas Kyriakides ([email protected])

To: [email protected]; [email protected]

Cc: [email protected]

Date: Monday, 19 May 2014, 17:48

Dear Sheikh

Thanks for your email and interest. Please find attached the observation instruments you requested along with their guidelines. This appendix is part of our book “Creemers, B.P.M., & Kyriakides, L. (2012). Improving Quality in Education: Dynamic Approaches to School Improvement. London and New York: Routledge.”

Please let me know if you have any query. I am also happy to help you with the statistical analysis of your data and also I am also interested to know about the results of your study when it is completed.

Best wishes

Leonidas

Appendix A2: Request for the guidelines for analysing student questionnaire

Subject: RE: Request for the guidelines of student questionnaire

From: Leonidas Kyriakides ([email protected])

To: [email protected]

Date: Wednesday, 2 September 2015, 18:25

Dear Asad,

I apologize for not responding earlier to your message but I was abroad at several conferences during the last month and I had limited access to my email and documents.

Please find attached the specification table with the questionnaire items categorized per factor for the questionnaire that you have sent me in your message. There is no discrimination of the items based on the five dimensions in the table I am sending you since based on the results of the last studies it is difficult to have that categorization of the items per dimension.

For the factor concerned with the classroom as a learning environment you could try to have one factor for teacher – student and student- student interactions, but usually misbehavior is kept as a different factor. For the analyses my suggestion is to use Confirmatory Factor analysis and first develop a separate CFA model for each factor to see its fit to the data. Then you can try to have a second-order factor model with all the factors.

You can also find attached a paper that shows the analyses of the student questionnaire for an international project we had.

Please don’t hesitate to contact me in case you need anything further.

Best wishes, Leonidas Kyriakides

Attachment ‘Specification table: Items of the student questionnaire by factor’ contained in email from Leonidas Kyriakides (2 September, 2015).

Teacher factor: Questionnaire items

Orientation: 8, 51

Structuring: 1, 2, 3, 4, 7, 10, 34

Application: 11, 12, 14, 15

Management of Time: 31, 22, 35, 36

Questioning: 24, 25, 38, 39, 40, 41, 42, 43, 52

Modeling: 44, 45, 46, 47

Classroom as a Learning Environment / Teacher-Student Interaction: 13, 16, 19, 20, 21, 26, 37

Classroom as a Learning Environment / Student-Student Interaction: 17, 18, 22, 23

Classroom as a Learning Environment / Dealing with Misbehaviour: 27, 28, 29, 30, 33

Assessment: 5, 6, 9, 48, 49, 50

Note. Items 49-52 are measured using a different scale and if one wants to use them in the analysis, Rasch analysis should be conducted.
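For readers transcribing this specification table for analysis, a minimal Python sketch (the names `ITEMS_BY_FACTOR`, `missing` and `duplicated` are illustrative and not part of the thesis) can check how the 52 questionnaire items are distributed across factors as printed:

```python
from collections import Counter

# Item-to-factor mapping, transcribed as printed in the specification table above.
ITEMS_BY_FACTOR = {
    "Orientation": [8, 51],
    "Structuring": [1, 2, 3, 4, 7, 10, 34],
    "Application": [11, 12, 14, 15],
    "Management of Time": [31, 22, 35, 36],
    "Questioning": [24, 25, 38, 39, 40, 41, 42, 43, 52],
    "Modeling": [44, 45, 46, 47],
    "CLE / Teacher-Student Interaction": [13, 16, 19, 20, 21, 26, 37],
    "CLE / Student-Student Interaction": [17, 18, 22, 23],
    "CLE / Dealing with Misbehaviour": [27, 28, 29, 30, 33],
    "Assessment": [5, 6, 9, 48, 49, 50],
}

# Count how often each of the 52 questionnaire items is assigned to a factor.
counts = Counter(item for items in ITEMS_BY_FACTOR.values() for item in items)
missing = sorted(set(range(1, 53)) - set(counts))           # items assigned to no factor
duplicated = sorted(i for i, n in counts.items() if n > 1)  # items assigned to more than one factor
print("unassigned items:", missing)
print("multiply assigned items:", duplicated)
```

Run against the table as printed, the check reports item 32 as unassigned and item 22 as appearing under two factors (Management of Time and Student-Student Interaction); whether this reflects the original attachment or a transcription slip cannot be determined from the table itself.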

Appendix A3: Phase one UNSW HREA Panel B approval

Human Research Ethics Advisory Panel B Arts, Humanities & Law

Date: 09.10.2013

Investigators: Mr Sheikh Asadullah

Supervisors: Dr Kerry Barnett, Dr Paul Ayres

School: School of Education

Re: Effective Mathematics Teaching in Secondary Schools in Bangladesh

Reference Number: 13 107

The Human Research Ethics Advisory Panel B for the Arts, Humanities & Law is satisfied that this project is of minimal ethical impact and meets the requirements as set out in the National Statement on Ethical Conduct in Human Research*. Having taken into account the advice of the Panel, the Deputy Vice-Chancellor (Research) has approved the project to proceed.

Your Head of School/Unit/Centre will be informed of this decision.

This approval is valid for 12 months from the date stated above.

Yours sincerely

Associate Professor Anne Cossins Panel Convenor Human Research Ethics Advisory Panel B

Cc: Professor Chris Davison Head of School School of Education

* http://www.nhmrc.gov.au/

Appendix A4: Phase one Board of Intermediate and Secondary Education (BISE), Dhaka approval

Appendix A5: Phase two UNSW HREA Panel B approval

Appendix A6: Phase two request letter to Directorate of Secondary & Higher Education (DSHE), Dhaka.

Date: 03 November 2014

The Director General, Directorate of Secondary and Higher Education (DSHE), Dhaka.

Dear Sir/Madam: With due honor, I would like to state that I am a PhD candidate in the School of Education at the University of New South Wales (UNSW), Australia, under the supervision of Dr Kerry Barnett and Professor Paul Ayres.

As part of my PhD research, I plan to explore the teaching practices of effective mathematics teachers in 20 of the highest performing secondary schools in Dhaka Mohanogory, Bangladesh. In order to conduct the study, I wish to request your permission to email the principals of the schools listed in enclosure A with a request to participate in the second phase of my study. This would require principals to nominate effective mathematics teachers in the school who would be willing to participate in a videoed 40-minute lesson observation, a short 40-minute face-to-face interview and a questionnaire. In addition, it would require them to facilitate parental permission for students in year 9 and/or 10 of these teachers to complete a 30-minute questionnaire. Therefore, I request your permission to access the listed schools for the purpose of data collection for my research study.

I would like to assure you that measures will be taken to ensure the privacy and anonymity of the schools and of the information of the teachers and students. These measures include: using records only for the purpose for which they were made available to the study; maintaining the confidentiality of records by assigning codes to each school, teacher and student; keeping the names and corresponding codes stored securely and separately from the school, teacher and student scores used in the analysis; and ensuring the secure storage of confidential records both during and after the study. I have attached a copy (enclosure B) of the approval from the UNSW Human Research Ethics Advisory (HREA) Panel B of my university. Should you have any questions or concerns regarding this research, please contact my supervisor, Dr Kerry Barnett, at [email protected]

Yours sincerely,

------Sheikh Asadullah ID-3304792 PhD Candidate School of Education University of New South Wales, Sydney, Australia

Enclosure: A. List of 20 (twenty) secondary schools. B. Copy of the approval from Human Research Ethics Advisory (HREA) panel, UNSW.

Appendix A7: Phase two DSHE, Dhaka approval

Appendix A8: Phase two recruitment email/letter to principals

Dear Principal/Headmaster

I, Sheikh Asadullah, a Ph.D. candidate in the School of Education at UNSW, Australia, am conducting a research study investigating the teaching practices of effective secondary school mathematics teachers in Dhaka, Bangladesh. It is supervised by Dr Kerry Barnett, who is a lecturer in the School of Education at UNSW. The purpose of the study is to learn about the teaching practices of effective mathematics teachers and their relationship with the mathematics achievement of students in secondary schools. The findings will be written up in my doctoral dissertation to be submitted to the university.

Your school has been selected from the schools of Dhaka Mohanogory because the mathematics teachers and their year 9 or year 10 students will be able to provide important insight into the mathematics teaching practices used in classrooms to help students learn mathematics in secondary schools. I am writing to you for your permission to conduct my research at your school. This research study has received UNSW ethics approval (No: 14 148) and DSHE approval (No: IT/53TM/2011/1028a) to write to you and request your participation in this study.

Participation in this study will involve:
1. Observation of the classroom teaching practices (either in year 9 or year 10) of mathematics teacher(s).
2. A separate face-to-face interview with the observed teacher and completion of a questionnaire, which will take approximately 40 minutes.
3. Completion of a questionnaire by students from the observed teacher’s class, which will take approximately 30 minutes.

If you allow me to conduct my research at your school, I would like to request that you provide the email addresses of the mathematics teachers and their length of service at your school. Please note that the participation of the teacher(s) is voluntary.

Concerning the consent of the students to participate, I would like to request that you distribute the respective Participant Information Statement and Consent forms (attached) to the students and their parents or guardians, informing them of the study and having them sign and return these forms before the study commences. In the case of students and parents who do not give their consent to video recording, the researcher will request that the teacher direct these students to sit in an area of the classroom outside the frame of the camera.

Information collected will be handled confidentially, and no school or person will be mentioned by name. However, I will be happy to discuss findings with you at a later stage.

Thank you again for your consideration of my request.

Yours sincerely,

Sheikh Asadullah UNSW, Australia

Appendix A9: Phase two recruitment email/letter to teachers

Dear Teacher,

I, Sheikh Asadullah, a Ph.D. candidate in the School of Education at UNSW, Australia, am conducting a research study investigating the teaching practices of effective secondary school mathematics teachers in Dhaka, Bangladesh. It is supervised by Dr Kerry Barnett, who is a lecturer in the School of Education at UNSW. The purpose of the study is to learn about the teaching practices of effective mathematics teachers and their relationship with the mathematics achievement of students in secondary schools. The findings will be written up in my doctoral dissertation to be submitted to the university.

Your school has been selected from the schools of Dhaka Mohanogory, and I am writing to request your participation in my research. This research study has received UNSW ethics approval (No: 14 148) and DSHE approval (No: IT/53TM/2011/1028a) to write to you and request your participation in this study.

Your participation in this study will involve:
1. Observation and recording of your classroom teaching practices either in year 9 or year 10.
2. A face-to-face interview with you and completion of a questionnaire, which will take approximately 40 minutes.

Please note that your decision to participate in the research is completely voluntary and a decision not to participate will not be disclosed to any school authority.

Please reply to this email if you are willing to participate in the study. A copy of the interview questions and the questionnaire is attached. I am hopeful that you will be prepared to participate. You will need to review the Participant Information Statement, sign the attached consent form and return it to me before the interview and questionnaire.

Information collected will be handled confidentially, and no school or person will be mentioned by name. However, I will be happy to discuss findings with you at a later stage.

Thank you again for your consideration of my request.

Yours sincerely,

Sheikh Asadullah UNSW, Australia

Appendix A10: Participant information statement and consent form (teachers)

HREC Approval no. 14.148

SCHOOL OF EDUCATION

THE UNIVERSITY OF NEW SOUTH WALES

PARTICIPANT INFORMATION STATEMENT AND CONSENT FORM

An Investigation of the Teaching Practices of Effective Mathematics Teachers in Secondary Schools in Dhaka, Bangladesh. Chief Investigator: Sheikh Asadullah

Participant selection and purpose of the study

My name is Sheikh Asadullah. I am a doctoral student in the School of Education at UNSW, Australia, conducting a research study investigating the teaching practices of effective secondary school mathematics teachers in Dhaka, Bangladesh. You are invited to participate in a study involving secondary school teachers and students. I hope to learn about the teaching practices of effective mathematics teachers in secondary schools. You were selected as a possible participant in this study because your school has been identified as a high performing school in mathematics.

Description of study and risks

If you decide to participate, I will, with your permission, observe you teaching mathematics to one of your year 9 or year 10 classes, and conduct a 40-minute face-to-face interview and survey with you about your teaching practices. The observations will be video recorded, and the interviews will be recorded only with your permission, for transcription and accuracy in analysis. Students as well as their parents have also been requested to give their consent in regard to the students being video recorded. Non-consenting students will be asked to sit in an area of your classroom that is outside the frame of the video camera.

The data will be securely stored within the School of Education, UNSW for seven years and then destroyed. The data will be de-identified by assigning codes, and no schools or individuals will be identified. Only the researcher, Sheikh Asadullah, and his supervisor, Dr Kerry Barnett, will have access to this information.

I do not expect that you will experience any discomfort or risk as a result of your participation in this study.

I cannot and do not guarantee or promise that you will receive any benefits from this study.

SCHOOL OF EDUCATION

THE UNIVERSITY OF NEW SOUTH WALES

PARTICIPANT INFORMATION STATEMENT AND CONSENT FORM (continued)

An Investigation of the Teaching Practices of Effective Mathematics Teachers in Secondary Schools in Dhaka, Bangladesh. Chief Investigator: Sheikh Asadullah

Confidentiality and disclosure of information

Any information that is obtained in connection with this study and that can be identified with you will remain confidential and will be disclosed only with your permission or except as required by law. If you give your permission by signing this document, I plan to discuss the results in my thesis and publish some of them in reputable educational journals. In any publication, information will be provided in such a way that you cannot be identified.

Complaints

Complaints may be directed to the Ethics Secretariat, The University of New South Wales, SYDNEY 2052 AUSTRALIA (phone (02) 9385 4234, fax (02) 9385 6222, email [email protected]). Any complaint you make will be investigated promptly and you will be informed of the outcome.

Feedback to participants

A summary of the research findings will be emailed to your school at the completion of the study.

Your consent

Your decision whether or not to participate will not prejudice your future relations with The University of New South Wales. If you decide to participate, you are free to withdraw your consent and to discontinue participation at any time without prejudice.

If you have any questions, please feel free to ask us. If you have any additional questions later, Sheikh Asadullah (ph 0416611822) will be happy to answer them.

You will be given a copy of this form to keep.

SCHOOL OF EDUCATION

THE UNIVERSITY OF NEW SOUTH WALES

PARTICIPANT INFORMATION STATEMENT AND CONSENT FORM (continued)

An Investigation of the Teaching Practices of Effective Mathematics Teachers in Secondary Schools in Dhaka, Bangladesh. Chief Investigator: Sheikh Asadullah

Declaration by Participant

I have read the Participant Information Sheet or someone has read it to me in a language that I understand.

I understand the purposes, procedures and risks of the research described in the project.

I have had an opportunity to ask questions and I am satisfied with the answers I have received.

I freely agree to participate in this research project as described and understand that I am free to withdraw at any time during the project without affecting my future care.

I understand that I will be given a signed copy of this document to keep.

------Signature of the Participant Signature of the Witness

------(Please PRINT name) (Please PRINT name)

------Date Nature of Witness

SCHOOL OF EDUCATION

REVOCATION OF CONSENT

An Investigation of the Teaching Practices of Effective Mathematics Teachers in Secondary Schools in Dhaka, Bangladesh. Investigator: Sheikh Asadullah

I hereby wish to WITHDRAW my consent to participate in the research project described above and understand that such withdrawal WILL NOT jeopardize any treatment or my relationship with The University of New South Wales, Sydney, NSW 2052, Australia.

------Signature Date

------(Please PRINT name)

The section for Revocation of Consent should be forwarded to Sheikh Asadullah, c/o Dr Kerry Barnett, Room no. 106, Goodsell Building, School of Education, The University of New South Wales, Sydney, NSW 2052, Australia, email. [email protected]

Appendix A11: Participant information statement and consent form (parent/guardian).

HREC Approval no. 14.148

SCHOOL OF EDUCATION

THE UNIVERSITY OF NEW SOUTH WALES

PARTICIPANT INFORMATION STATEMENT AND CONSENT FORM

An Investigation of the Teaching Practices of Effective Mathematics Teachers in Secondary Schools in Dhaka, Bangladesh. Chief Investigator: Sheikh Asadullah

Participant selection and purpose of the study

My name is Sheikh Asadullah. I am a doctoral student in the School of Education at UNSW, Australia, conducting a research study investigating the teaching practices of effective secondary school mathematics teachers in Dhaka, Bangladesh. The student is invited to participate in a study involving the teaching practices of mathematics teachers in secondary schools in Bangladesh. I hope to learn about the teaching practices of effective mathematics teachers in secondary schools. The student was selected as a possible participant in this study because he/she is a student in a school which has agreed to participate in this study.

Description of study and risks

If you allow the student to participate, I will conduct classroom observations of his/her mathematics class and ask him/her to complete a questionnaire, which will take approximately 30 minutes and will explore issues related to the study. The classroom observations will be video recorded with your permission to allow for accurate transcription and analysis. The data collected will be securely stored within the School of Education, UNSW for seven years and then destroyed. The data will be coded so that schools and individuals cannot be identified. Only the researcher, Sheikh Asadullah, and his supervisor, Dr Kerry Barnett, will have access to this information. If you do not wish the student to be video recorded, the student will be seated in an area of the classroom that is outside the frame of the video camera.

I do not expect that the student will experience any discomfort or risk as a result of his/her participation in this study.

I cannot and do not guarantee or promise that the student will receive any benefits from this study.

SCHOOL OF EDUCATION

THE UNIVERSITY OF NEW SOUTH WALES

PARTICIPANT INFORMATION STATEMENT AND CONSENT FORM (continued)

An Investigation of the Teaching Practices of Effective Mathematics Teachers in Secondary Schools in Dhaka, Bangladesh. Chief Investigator: Sheikh Asadullah

Confidentiality and disclosure of information

Any information that is obtained in connection with this study and that can be identified with the student will remain confidential and will be disclosed only with your permission or except as required by law. If you give your permission by signing this document, I plan to publish the results in my thesis and in reputable educational journals. In any publication, information will be provided in such a way that the student cannot be identified.

Complaints

Complaints may be directed to the Ethics Secretariat, The University of New South Wales, SYDNEY 2052 AUSTRALIA (phone (02) 9385 4234, fax (02) 9385 6222, email [email protected]). Any complaint you make will be investigated promptly and you will be informed of the outcome.

Feedback to participants

A summary of the research findings will be emailed to the student’s school at the completion of the study.

Your consent

Your decision whether or not to allow the student to participate will not prejudice your or the student’s future relations with The University of New South Wales. If you allow the student to participate, you are free to withdraw your consent and to discontinue his/her participation at any time without prejudice.

If you have any questions, please feel free to ask us. If you have any additional questions later, Sheikh Asadullah (ph 0416611822) will be happy to answer them.

You will be given a copy of this form to keep.

SCHOOL OF EDUCATION

THE UNIVERSITY OF NEW SOUTH WALES

PARTICIPANT INFORMATION STATEMENT AND CONSENT FORM (continued)

An Investigation of the Teaching Practices of Effective Mathematics Teachers in Secondary Schools in Dhaka, Bangladesh. Chief Investigator: Sheikh Asadullah

Declaration by Participant

I have read the Participant Information Sheet or someone has read it to me in a language that I understand.

I understand the purposes, procedures and risks of the research described in the project.

I have had an opportunity to ask questions and I am satisfied with the answers I have received.

I freely agree to allow the student to participate in this research project as described and understand that I am free to withdraw the student’s participation at any time during the project without affecting the student’s future care.

I understand that I will be given a signed copy of this document to keep.

------Signature of the Parent/Guardian Signature of the Witness

------(Please PRINT name) (Please PRINT name)

------Date Nature of Witness

------Print the name of the Student

------Signature of the investigator(s)

------(Please PRINT name)

SCHOOL OF EDUCATION

REVOCATION OF CONSENT

An Investigation of the Teaching Practices of Effective Mathematics Teachers in Secondary Schools in Dhaka, Bangladesh. Chief Investigator: Sheikh Asadullah

I hereby wish to WITHDRAW my consent to the student’s participation in the research project described above and understand that such withdrawal WILL NOT jeopardize any treatment or my/the student’s relationship with The University of New South Wales, Sydney, NSW 2052, Australia.

------Signature Date

------(Please PRINT name) Print the name of the Student

The section for Revocation of Consent should be forwarded to Sheikh Asadullah, c/o Dr Kerry Barnett, Room no. 106, Goodsell Building, School of Education, The University of New South Wales, Sydney, NSW 2052, Australia, email. [email protected]

Appendix B: Instruments

Appendix B1: Teacher questionnaire

Part-1

Instructions: This part of the questionnaire includes demographic questions about you. Please complete the questions in the space below. Your responses will be kept strictly confidential and you will not be identified in this study.

What is your age?

Are you a male or a female?

What is your highest degree or the level of school you have completed?

Did you complete the Bachelor of Education program? If so, in what year?

How many years have you been teaching mathematics at secondary level?

How many years have you been teaching mathematics in this school?

Part-2

Instructions: The statements below refer specifically to TEACHING MATHEMATICS in the CLASSROOM. There are no right or wrong answers. Please mark the percentage (place a tick (✓) in the box) that best represents how confident you are that you can carry out each activity. For example, if you are completely confident that you can do it, mark 100%. If you have no confidence that you can carry out the activity, mark 0%. If your confidence lies somewhere in between, then please mark the percentage that most closely matches your confidence.

I am confident that I can: (rated on a percentage scale: 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100)

1. Orientate the objective of the lesson or task clearly so that the lesson or task is meaningful to students.
2. Outline the key points of the lesson or task and relate them to previous lessons or tasks.
3. Control disruptive behaviour in the classroom, even the most aggressive or noisy students.
4. Get students to follow classroom rules.
5. Calm a student who is disruptive or noisy.
6. Keep a few problem students from ruining an entire lesson.
7. Respond to defiant students.
8. Make my expectations clear about student behaviour.
9. Establish routines to keep activities running smoothly.
10. Encourage positive aspects of competition and collaboration among students.
11. Get students to believe they can do well in the task.
12. Motivate students who are not interested or less interested in the task.
13. Improve the understanding of a student who is failing.
14. Get through to the most difficult students.
15. Provide students with practice opportunities to master the task or lesson.
16. Help students think critically.
17. Foster student creativity.
18. Provide opportunities/challenges to capable students to use or develop alternative strategies to solve the problem or types of problems.
19. Craft good questions for all students.
20. Provide an alternative explanation or appropriate clues for confused students.
21. Respond to difficult questions from students.
22. Use a variety of assessment strategies to evaluate students’ learning outcomes.
23. Gauge student comprehension of what has been taught.
24. Spend the classroom teaching time mostly on learning activities.
25. Engage the majority of students in the learning activities or tasks provided by me during the lesson.

Appendix B2: Development of teacher self-efficacy scale

Each entry below gives the item number, the teaching behaviour measured, the statement, and (where applicable) the corresponding TSES item with whether the item is new or modified.

1. Orientation: Orientate the objective of the lesson or task clearly so that the lesson or task is meaningful to students. [New]
2. Structuring: Outline the key points of the lesson or task and relate them to previous lessons or tasks. [New]
3. CE(MB): Control disruptive behaviour in the classroom, even the most aggressive or noisy students. [TSES item 1, modified]
4. CE(MB): Get students to follow classroom rules. [TSES item 13, modified]
5. CE(MB): Calm a student who is disruptive or noisy. [TSES item 7, modified]
6. CE(MB): Keep a few problem students from ruining an entire lesson. [TSES item 19, modified]
7. CE(MB): Respond to defiant students. [TSES item 21, modified]
8. CE(MB): Make my expectations clear about student behaviour. [TSES item 5]
9. CE(MB): Establish routines to keep activities running smoothly. [TSES item 8, modified]
10. CE(CCG): Encourage positive aspects of competition and collaboration among students. [New]
11. CE(I): Get students to believe they can do well in the task. [TSES item 6, modified]
12. CE(I): Motivate students who are not interested or less interested in the task. [TSES item 4, modified]
13. CE(I): Improve the understanding of a student who is failing. [TSES item 14, modified]
14. CE(I): Get through to the most difficult students. [TSES item 1, modified]
15. Practice: Provide students with practice opportunities to master the task or lesson. [New]
16. Modelling: Help students think critically. [TSES item 2, modified]
17. Modelling: Foster student creativity. [TSES item 5, modified]
18. Modelling: Provide opportunities/challenges to capable students to use or develop an alternative strategy to solve the problem or types of problems. [TSES items 20, 24, modified]
19. Questioning: Craft good questions for all students. [TSES item 11, modified]
20. Questioning: Provide an alternative explanation or appropriate clues for confused students. [TSES item 20, modified]
21. Questioning: Respond to difficult questions from students. [TSES item 7, modified]
22. Assessment: Use a variety of assessment strategies to evaluate students’ learning outcomes. [TSES item 18, modified]
23. Assessment: Gauge student comprehension of what has been taught. [TSES item 10, modified]
24. Time: Spend the classroom teaching time mostly on learning activities. [New]
25. Time: Engage the majority of students in the learning activities or tasks provided by me during the lesson. [New]

269

Appendix B3: Translated version (in Bangla) of teacher questionnaire

270

271

272

Appendix B4: Teacher interview schedule

Questions related to orientation
What was the goal of your lesson today?
Do you think your students understood or could identify the goal of today's lesson? How do you know?
Can you describe what you did to make sure your students understood the goal of this lesson?

Questions related to structuring
Did the lesson have a structure or sequence to it?
Can you describe how you organised today's lesson?
Do you think your students understood the structure of the lesson? For example, would they be able to identify the lesson sequence (what preceded and what followed) in each of the activities or the links between the activities?
How do you know this?

Questions related to modelling
Did you introduce students to something new today?
Can you describe how you did this (e.g., what activities were involved to ensure students were able to learn something new today)?
Do you think your students learned something new today? If so, what was it? How do you know they did?

Questions related to practice
Were students able to engage in practice activities? If so, why were these provided?
Can you give an example of one of these activities in today's lesson?

Questions related to questioning
Have you considered the difficulty level of the questions? How?
Do you think the questions to students were clear or understandable to them? Why?

273

Appendix B5: Translated version (in Bangla) of teacher interview schedule

274

Appendix C: Phase one data

275

Table C1. School ranking based on value added score for schools in Dhaka metropolitan area (n = 380) School Value School Value School Value School Value rank added rank added rank added rank added score score score score 1 1.91 49 0.86 97 0.60 145 0.42 2 1.60 50 0.85 98 0.60 146 0.42 3 1.58 51 0.84 99 0.60 147 0.41 4 1.55 52 0.83 100 0.60 148 0.41 5 1.45 53 0.82 101 0.59 149 0.41 6 1.31 54 0.81 102 0.59 150 0.41 7 1.29 55 0.81 103 0.58 151 0.41 8 1.27 56 0.80 104 0.58 152 0.40 9 1.22 57 0.80 105 0.58 153 0.40 10 1.18 58 0.80 106 0.58 154 0.39 11 1.14 59 0.78 107 0.57 155 0.39 12 1.13 60 0.78 108 0.56 156 0.39 13 1.13 61 0.77 109 0.56 157 0.38 14 1.12 62 0.77 110 0.56 158 0.38 15 1.12 63 0.76 111 0.55 159 0.38 16 1.12 64 0.76 112 0.55 160 0.38 17 1.09 65 0.75 113 0.55 161 0.38 18 1.08 66 0.75 114 0.55 162 0.37 19 1.08 67 0.75 115 0.55 163 0.37 20 1.07 68 0.75 116 0.54 164 0.37 21 1.07 69 0.73 117 0.53 165 0.37 22 1.05 70 0.72 118 0.53 166 0.36 23 1.05 71 0.72 119 0.52 167 0.36 24 1.01 72 0.72 120 0.52 168 0.36 25 1.01 73 0.71 121 0.52 169 0.35 26 0.99 74 0.71 122 0.51 170 0.35 27 0.98 75 0.71 123 0.51 171 0.35 28 0.97 76 0.68 124 0.50 172 0.34 29 0.97 77 0.68 125 0.49 173 0.33 30 0.96 78 0.68 126 0.49 174 0.33 31 0.95 79 0.67 127 0.48 175 0.33 32 0.95 80 0.67 128 0.48 176 0.33 33 0.93 81 0.67 129 0.48 177 0.33 34 0.93 82 0.67 130 0.47 178 0.32 35 0.93 83 0.66 131 0.47 179 0.32 36 0.92 84 0.66 132 0.47 180 0.32 37 0.91 85 0.65 133 0.46 181 0.32 38 0.91 86 0.65 134 0.46 182 0.32 39 0.90 87 0.65 135 0.46 183 0.32 40 0.89 88 0.64 136 0.46 184 0.32 41 0.89 89 0.63 137 0.45 185 0.31 42 0.89 90 0.62 138 0.45 186 0.30 43 0.88 91 0.62 139 0.45 187 0.30 44 0.87 92 0.62 140 0.44 188 0.30 45 0.87 93 0.61 141 0.44 189 0.30 46 0.86 94 0.61 142 0.43 190 0.30 47 0.86 95 0.61 143 0.43 191 0.30 48 0.86 96 0.60 144 0.43 192 0.30

276

Table C1. continued

School Value School Value School Value School Value rank added rank added rank added rank added score score score score 193 0.29 241 0.15 289 -0.17 337 -0.47 194 0.29 242 0.15 290 -0.18 338 -0.47 195 0.29 243 0.14 291 -0.20 339 -0.47 196 0.28 244 0.14 292 -0.20 340 -0.47 197 0.28 245 0.14 293 -0.21 341 -0.47 198 0.28 246 0.13 294 -0.21 342 -0.49 199 0.28 247 0.13 295 -0.22 343 -0.50 200 0.28 248 0.12 296 -0.22 344 -0.52 201 0.27 249 0.12 297 -0.22 345 -0.52 202 0.27 250 0.12 298 -0.23 346 -0.53 203 0.27 251 0.12 299 -0.24 347 -0.54 204 0.26 252 0.11 300 -0.25 348 -0.56 205 0.26 253 0.10 301 -0.25 349 -0.57 206 0.26 254 0.10 302 -0.26 350 -0.57 207 0.26 255 0.08 303 -0.27 351 -0.58 208 0.26 256 0.08 304 -0.27 352 -0.59 209 0.25 257 0.08 305 -0.29 353 -0.62 210 0.25 258 0.08 306 -0.29 354 -0.63 211 0.25 259 0.07 307 -0.30 355 -0.63 212 0.24 260 0.07 308 -0.30 356 -0.65 213 0.24 261 0.06 309 -0.31 357 -0.65 214 0.23 262 0.06 310 -0.31 358 -0.66 215 0.23 263 0.05 311 -0.31 359 -0.66 216 0.23 264 0.02 312 -0.32 360 -0.67 217 0.23 265 0.02 313 -0.32 361 -0.68 218 0.22 266 0.00 314 -0.33 362 -0.69 219 0.22 267 -0.02 315 -0.34 363 -0.69 220 0.22 268 -0.02 316 -0.34 364 -0.71 221 0.22 269 -0.04 317 -0.34 365 -0.74 222 0.21 270 -0.05 318 -0.35 366 -0.74 223 0.21 271 -0.05 319 -0.35 367 -0.74 224 0.21 272 -0.05 320 -0.36 368 -0.76 225 0.20 273 -0.07 321 -0.36 369 -0.76 226 0.20 274 -0.08 322 -0.36 370 -0.77 227 0.19 275 -0.10 323 -0.37 371 -0.79 228 0.19 276 -0.11 324 -0.38 372 -0.82 229 0.19 277 -0.11 325 -0.39 373 -0.83 230 0.19 278 -0.11 326 -0.40 374 -0.88 231 0.19 279 -0.12 327 -0.40 375 -0.96 232 0.18 280 -0.12 328 -0.42 376 -1.06 233 0.18 281 -0.12 329 -0.43 377 -1.12 234 0.18 282 -0.12 330 -0.43 378 -1.14 235 0.18 283 -0.13 331 -0.43 379 -1.45 236 0.17 284 -0.14 332 -0.44 380 -2.58 237 0.17 285 -0.14 333 -0.45 238 0.16 286 -0.15 334 -0.45 239 0.16 287 -0.16 335 -0.46 240 0.15 288 -0.16 336 -0.46 Notes. VAA refers to value added score. 
School names are omitted for confidentiality. The twenty highest performing secondary schools in mathematics are ranked 1-20 and shown in bold.
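The selection procedure behind Table C1 can be illustrated in outline: schools are sorted in descending order of value-added score and the top twenty retained. The following is a minimal sketch, not the thesis's actual implementation; the function name, the dict-based input and the anonymised identifiers are illustrative assumptions.

```python
def rank_schools(va_scores, top_n=20):
    """Rank schools by value-added score, highest first.

    va_scores: dict mapping an anonymised school identifier to its
    value-added score (illustrative input format).
    Returns a list of (rank, school_id, score) tuples for the top_n schools.
    """
    ranked = sorted(va_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(i + 1, sid, score) for i, (sid, score) in enumerate(ranked[:top_n])]

# Example with the first few scores from Table C1 attached to dummy identifiers
scores = {"S1": 1.91, "S2": 1.60, "S3": 1.58, "S4": 1.55}
print(rank_schools(scores, top_n=2))  # → [(1, 'S1', 1.91), (2, 'S2', 1.6)]
```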

277

Figure C1: Frequency distribution of value-added scores for 380 secondary schools

278

Appendix D: Phase two data

279

Appendix D1: Teacher characteristics

Table D1.1 Summary of frequency distributions of teacher demographic characteristics

Characteristic Category Frequency Percent

Sex Male 13 87

Female 2 13

Age (years) 35-39 3 20

40-44 3 20

45-49 4 27

50-54 5 33

Experience (years) 9-15 4 27

16-22 8 53

23-27 3 20

Qualifications Bachelor degree 4 27

Bachelor (Hons) 2 13

Master’s degree 9 60

Total 15 100

280

Table D1.2 Teacher self-efficacy ratings for teaching behaviours (percent)

School   Teacher   Orient   Struct   Model   Practice   Quest   CE(I)   CE(CCG)   CE(MB)
1        Shapla    90       80       83      70         80      88      95        97

2 Moni 100 90 90 90 73 75 90 97

2 Nila 80 80 67 100 40 58 80 67

2 Adnan 100 90 80 90 87 55 90 90

3 Saddam 100 100 90 100 57 73 85 86

3 Angel 100 100 83 100 50 68 83 83

3 Momota 100 90 80 100 43 63 90 91

4 Bela 100 90 83 100 93 80 80 94

6 Hazzaz 90 100 100 100 93 95 90 93

6 Antu 100 90 83 80 67 73 90 70

7 Bindu 100 100 93 90 70 83 100 97

12 Babu 100 100 97 100 90 93 95 94

15 Priya 100 100 97 90 93 95 100 97

15 Kazol 100 90 97 100 93 88 90 91

15 Shilpo 80 90 100 100 87 73 90 94

Notes. Orient refers to orientation, Struct refers to structuring, Quest refers to questioning, CE(I) refers to classroom environment (teacher-student interaction), CE(CCG) refers to classroom environment (competition, cooperation, group work), and CE(MB) refers to classroom environment (misbehaviour).

281

Appendix D2: Mathematical content of lessons in teacher observations

Content (number of teachers, n = 15)   Topic                    Teacher(s)
Algebra (8)                            Equation                 Moni
                                       Set                      Antu, Babu, Shapla
                                       Index                    Bindu, Bela
                                       Ratio & Proportion       Adnan
                                       Sequence & Series        Kazol
Geometry (3)                           Pythagoras proposition   Saddam
                                       Rectangular & square     Momota
                                       Cylinder                 Priya
Trigonometry (1)                       Height and distance      Angel
Statistics (3)                         Mean, Median, Mode       Nila, Hazzaz, Shilpo

282

Appendix D3: Codes and descriptors for qualitative characteristics of teaching behaviours

Table D3.1

Coding sequences for stage dimension of orientation, structuring, modelling, practice and questioning behaviours

Teaching behaviours: Orientation, Structuring, Modelling, Practice, Questioning

Descriptor            Code
Introduction          1
Core                  2
End                   3
More than one stage   4

Table D3.2

Coding sequences for focus dimension of orientation, structuring, modelling, practice and questioning behaviours

Teaching behaviour                   Descriptor                 Code
Orientation, Questioning, Practice   Task related               1
                                     Lesson related             2
                                     Unit related               3
                                     Task/lesson/unit related   4
Structuring                          Previous lesson related    1
                                     Present lesson related     2
                                     Unit related               3
                                     All                        4
Modelling                            Used in lesson only        1
                                     Used in unit               2
                                     Used across units          3

283

Table D3.3

Coding sequence for quality dimension of orientation, structuring, modelling, practice and questioning behaviours

Teaching behaviour   Specific behaviour             Descriptor                                            Code
Orientation                                         Typical                                               1
                                                    Related to learning                                   2
                                                    Students able to specify aims                         3
                                                    Related to learning/students able to specify aims     4
Structuring                                         Clear                                                 1
                                                    Unclear                                               2
                                                    Mixed                                                 3
Practice                                            Similar                                               1
                                                    Complex                                               2
                                                    Mixed                                                 3
Modelling            Teacher role                   Gave strategy to students                             1
                                                    Guided students to discover strategy                  2
                                                    Directed students to discover strategy                3
                                                    Mixed                                                 4
                     Lesson phase                   Model after problem                                   1
                                                    Model before problem                                  2
                                                    Mixed                                                 3
                     Appropriate                    Successful                                            1
                                                    Unsuccessful                                          2
                                                    Partly successful                                     3
Questioning          Question type                  Product                                               1
                                                    Process                                               2
                                                    Both                                                  3
                     Teacher response, no answer    Restate in easier words                               1
                                                    Pose easier questions                                 2
                                                    Move to next student                                  3
                                                    Move to next question                                 4
                                                    Answer self                                           5
                                                    Mixed                                                 6
                     Teacher response, to answer    No comment                                            1
                                                    Comment on correct/incorrect/partly correct           2
                                                    Invite student comment                                3
                                                    Comment & invite student comment                      4
                     Teacher response, to student   Negative comment on incorrect/partly correct answer   1
                                                    Positive comment to correct answer & constructive
                                                    comment on incorrect/partly correct answer            2
                                                    No comment                                            3
                                                    Mix of negative/positive/constructive comment         4

284

Table D3.4

Coding sequences for stage dimension of classroom environment

Teaching behaviours: CE(I), CE(CCG), CE(MB), No interaction, Interruption

Descriptor                          Code
Interaction in introductory stage   1
Interaction in core stage           2
Interaction at end stage            3
Interaction in multiple stages      4

Table D3.5

Coding sequences for focus dimension of classroom environment

Teaching behaviour                                   Descriptor                           Code
CE(I), CE(CCG), CE(MB), No interaction,              None                                 1
Interruption                                         Interaction to facilitate learning   2
                                                     Interaction for management           3
                                                     Interaction to create friendly CE    4
                                                     Interaction for above reasons        5
CE(MB)                                               Incidental                           1
                                                     Continuous                           2

Table D3.6

Coding sequences for quality dimension of classroom environment

Teaching behaviour                   Descriptor                                    Code
CE(I), CE(CCG), No interaction,      Interaction to ensure students on-task        1
Interruption                         Interaction did not ensure students on-task   2
                                     Interaction has mixed student effect          3
CE(MB)                               Ignores deliberately                          1
                                     Disorder not resolved                         2
                                     Resolves disorder                             3
Notes. CE(I) refers to classroom environment (teacher-student interaction), CE(CCG) refers to classroom environment (competition, cooperation, group work), and CE(MB) refers to classroom environment (misbehaviour).

285

Table D3.7

Coding sequences for differentiation dimension of all teaching behaviours

Teaching behaviours: Orientation, Structuring, Modelling, Practice, Questioning, CE(I), CE(CCG), CE(MB), No interaction

Descriptor                      Code
Differentiation of activities   1

Notes. CE(I) refers to classroom environment (teacher-student interaction), CE(CCG) refers to classroom environment (competition, cooperation, group work), and CE(MB) refers to classroom environment (misbehaviour).

286

Appendix D4: Qualitative characteristics of classroom as a learning environment (other than teacher initiated)

Table D4.1

Stage, focus, quality dimensions of classroom as a learning environment (student initiated)

Columns per activity: St. (stage) / Focus / Qual. (quality)
Activities: Give answer; Ask question or request help; Take initiative; Collaborate (student initiated); Collaborate (teacher initiated); Work on own

Teacher   Give answer   Ask/request help   Take initiative   Collab. (student)   Collab. (teacher)   Work on own
Shapla    4 2 1         0 0 0              1 2 1             2 2 1               0 0 0               4 2 1
Moni      4 2 1         0 0 0              4 2 1             0 0 0               0 0 0               2 2 1
Nila      4 2 1         2 2 1              1 2 1             2 2 1               4 5 3               0 0 0
Adnan     3 2 1         2 2 3              1 2 1             4 2 1               0 0 0               4 2 3
Saddam    4 2 1         0 0 0              0 0 0             0 0 0               3 2 2               4 2 1
Angel     1 2 1         2 2 1              0 0 0             3 2 1               2 2 1               4 2 3
Momota    4 2 1         2 2 1              0 0 0             0 0 0               1 2 3               4 2 3
Bela      0 0 0         0 0 0              0 0 0             0 0 0               0 0 0               4 2 3
Hazzaz    4 2 1         2 2 1              0 0 0             3 2 1               0 0 0               4 2 1
Antu      3 2 1         0 0 0              0 0 0             0 0 0               4 2 1               4 2 1
Bindu     1 2 1         0 0 0              0 0 0             4 2 3               0 0 0               4 2 1
Babu      1 2 1         2 2 1              0 0 0             4 2 3               1 2 2               2 2 3
Priya     4 2 1         0 0 0              0 0 0             0 0 0               4 2 1               3 2 1
Kazol     4 2 1         0 0 0              4 2 1             3 2 1               0 0 0               4 2 1
Shilpo    2 2 1         4 2 1              0 0 0             4 2 3               0 0 0               3 2 3
Notes. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed. St. refers to stage, Qual. refers to quality.

287

Table D4.2

Stage, focus, quality dimensions of classroom as a learning environment (competition, cooperation, group work) and (misbehaviour)

Columns per category: Stage / Focus / Quality

Teacher   CE(CCG)   Student misbehaviour   Teacher deals with misbehaviour
Shapla    2 2 1     0 0 0                  0 0 0
Moni      0 0 0     0 0 0                  0 0 0
Nila      2 2 2     0 0 0                  0 0 0
Adnan     2 2 3     0 0 0                  0 0 0
Saddam    0 0 0     0 0 0                  0 0 0
Angel     0 0 0     0 0 0                  0 0 0
Momota    0 0 0     4 1 n/a                4 2 1
Bela      0 0 0     0 0 0                  0 0 0
Hazzaz    0 0 0     0 0 0                  0 0 0
Antu      2 2 1     0 0 0                  0 0 0
Bindu     1 2 2     0 0 0                  0 0 0
Babu      2 2 2     0 0 0                  0 0 0
Priya     4 2 2     0 0 0                  0 0 0
Kazol     0 0 0     0 0 0                  0 0 0
Shilpo    2 2 2     0 0 0                  0 0 0
Notes. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed.

288

Table D4.3

Stage, focus, quality dimensions of classroom as a learning environment that is neither initiated by the teacher nor the student

Columns per category: Stage / Focus / Qual. (quality)
Categories: No interaction (Organisation; Other activities; Monitor), Interruption (External; Silence)

Teacher   Organisation   Other activities   Monitor   External   Silence

Shapla 4 2 3 0 0 0 4 2 1 4 1 2 0 0 0

Moni 4 2 3 3 1 1 3 2 1 4 1 2 0 0 0

Nila 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Adnan 4 2 1 4 1 1 4 2 1 0 0 0 0 0 0

Saddam 4 2 2 2 1 1 4 2 1 0 0 0 3 1 2

Angel 4 2 3 2 1 1 2 2 1 0 0 0 4 5 2

Momota 1 2 3 2 1 1 4 2 1 0 0 0 0 0 0

Bela 4 2 3 4 1 1 4 2 1 0 0 0 0 0 0

Hazzaz 4 2 3 0 0 0 3 2 1 4 1 2 0 0 0

Antu 0 0 0 0 0 0 2 2 3 0 0 0 0 0 0

Bindu 4 2 3 4 1 1 4 2 1 0 0 0 0 0 0

Babu 0 0 0 2 1 1 2 2 1 0 0 0 0 0 0

Priya 1 2 2 0 0 0 4 2 1 0 0 0 0 0 0

Kazol 0 0 0 3 1 1 4 2 1 0 0 0 0 0 0

Shilpo 4 2 3 4 1 1 4 2 3 1 1 2 0 0 0
Notes. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed. Qual. refers to quality.

289

Appendix D5: Frequency distributions for quantitative and qualitative characteristics of teaching behaviours

Table D5.1

Frequency distribution of number of teaching tasks

Teaching behaviour   Interval (counts)   Frequency   Percent   Cumulative percent
Orientation          1-2                 6           40        40
                     3-4                 7           47        87
                     9-10                2           13        100
                     Total               15          100
Structuring          0                   1           7         7
                     1-2                 9           60        67
                     3-4                 4           26        93
                     7-8                 1           7         100
                     Total               15          100
Modelling            1-2                 3           20        20
                     3-4                 8           53        73
                     5-6                 2           13        86
                     7-8                 1           7         93
                     11-12               1           7         100
                     Total               15          100
Practice             0                   1           7         7
                     1-2                 10          67        84
                     3-4                 4           26        100
                     Total               15          100
Questioning          0                   6           40        40
                     1-2                 1           7         47
                     3-4                 2           13        60
                     5-6                 3           20        80
                     7-8                 2           13        93
                     17-18               1           7         100
                     Total               15          100
CE(I)                1-2                 2           13        13
                     3-4                 9           60        73
                     5-6                 3           20        93
                     7-8                 1           7         100
                     Total               15          100
CE(CCG)              0                   7           47        47
                     1-2                 7           47        94
                     7-8                 1           6         100
                     Total               15          100
CE(MB)               0                   14          93        93
                     1-2                 1           7         100
                     Total               15          100
Notes. CE(I) refers to classroom environment (teacher-student interaction), CE(CCG) refers to classroom environment (competition, cooperation, group work), and CE(MB) refers to classroom environment (misbehaviour).

290

Table D5.2

Frequency distribution of duration of teaching tasks

Teaching behaviour   Interval (minutes)   Frequency   Percent   Cumulative percent
Orientation          0-2                  8           53        53
                     3-5                  2           13        66
                     6-8                  3           20        86
                     12-14                2           13        100
                     Total                15          100
Structuring          0-2                  12          80        80
                     3-5                  2           13        93
                     6-8                  1           7         100
                     Total                15          100
Modelling            10-15                6           40        40
                     16-20                2           13        53
                     21-25                7           46        100
                     Total                15          100
Practice             0-3                  3           20        20
                     4-7                  4           27        47
                     8-11                 4           27        74
                     12-15                3           20        94
                     20-23                1           7         100
                     Total                15          100
Questioning          0-1                  10          67        67
                     2-3                  2           13        80
                     4-5                  3           20        100
                     Total                15          100
CE(I)                4-8                  15          100       100
                     Total                15          100
CE(CCG)              0-1                  13          87        87
                     2-3                  2           13        100
                     Total                15          100
CE(MB)               0-2                  14          93        93
                     6-8                  1           7         100
                     Total                15          100
Notes. CE(I) refers to classroom environment (teacher-student interaction), CE(CCG) refers to classroom environment (competition, cooperation, group work), and CE(MB) refers to classroom environment (misbehaviour).

291

Table D5.3

Frequency distribution of stage dimension of orientation, structuring, modelling, practice and questioning behaviours

Teaching behaviour   Category   Frequency   Percent   Cumulative percent
Orientation          1          3           20        20
                     2          1           7         27
                     4          11          73        100
                     Total      15          100
Structuring          0          1           7         7
                     1          6           40        47
                     3          1           7         54
                     4          7           47        100
                     Total      15          100
Modelling            2          1           7         7
                     3          1           7         14
                     4          13          86        100
                     Total      15          100
Practice             0          1           7         7
                     2          2           13        20
                     3          9           60        80
                     4          3           20        100
                     Total      15          100
Questioning          0          6           40        40
                     1          1           7         47
                     3          1           7         54
                     4          7           47        100
                     Total      15          100
Notes. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed.

292

Table D5.4

Frequency distribution of focus dimensions of orientation, structuring, modelling, practice and questioning behaviours

Teaching behaviour   Category   Frequency   Percent   Cumulative percent
Orientation          1          1           7         7
                     2          6           40        47
                     4          8           53        100
                     Total      15          100
Structuring          0          1           7         7
                     1          2           13        20
                     2          5           33        53
                     3          1           7         60
                     4          6           40        100
                     Total      15          100
Modelling            2          10          67        67
                     3          5           33        100
                     Total      15          100
Practice             0          1           7         7
                     1          3           20        27
                     2          8           53        80
                     4          3           20        100
                     Total      15          100
Questioning          0          6           40        40
                     1          8           53        93
                     4          1           7         100
                     Total      15          100
Notes. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed.

293

Table D5.5

Frequency distribution of quality dimension of orientation, structuring, modelling, practice and questioning behaviours

Teaching behaviour   Sub-dimension                       Category   Frequency   Percent   Cumulative percent
Orientation                                              1          1           7         7
                                                         2          5           33        40
                                                         3          1           7         47
                                                         4          8           53        100
                                                         Total      15          100
Structuring                                              0          1           7         7
                                                         1          12          80        87
                                                         3          2           13        100
                                                         Total      15          100
Modelling            Teacher role                        1          8           53        53
                                                         4          7           47        100
                                                         Total      15          100
                     Phase                               1          6           40        40
                                                         2          4           27        67
                                                         3          5           33        100
                                                         Total      15          100
                     Appropriate                         1          12          80        80
                                                         2          1           7         87
                                                         3          2           13        100
                                                         Total      15          100
Practice                                                 0          1           7         7
                                                         1          7           47        54
                                                         2          6           40        94
                                                         3          1           7         100
                                                         Total      15          100
Questioning          Question type                       0          6           40        40
                                                         1          7           47        87
                                                         3          2           13        100
                                                         Total      15          100
                     Teacher response, no answer         3          2           50        50
                                                         5          1           25        75
                                                         6          1           25        100
                                                         Total      4           100
                     Teacher response, answer feedback   2          5           56        56
                                                         4          4           44        100
                                                         Total      9           100
                     Teacher response, student feedback  2          3           33        33
                                                         4          6           67        100
                                                         Total      9           100
Notes. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed.

294

Table D5.6

Frequency distribution of stage dimension of CE(I) teacher and student initiated, CE(CCG), CE(MB), no interaction and interruption

Teaching behaviour                   Category   Frequency   Percent   Cumulative percent
CE(I) teacher initiated
  Presentation                       4          15          100       100
                                     Total      15          100
  Instruction                        0          1           7         7
                                     3          7           47        54
                                     4          7           47        100
                                     Total      15          100
  Comment                            0          3           20        20
                                     1          3           20        40
                                     3          1           7         47
                                     4          8           53        100
                                     Total      15          100
  Pose problems and/or questions     2          1           7         7
                                     4          14          93        100
                                     Total      15          100
  Organise                           1          2           13        13
                                     4          13          87        100
                                     Total      15          100
  Monitor                            0          2           13        13
                                     2          4           27        40
                                     3          5           33        73
                                     4          4           27        100
                                     Total      15          100
  Social interaction                 1          1           7         7
                                     3          1           7         14
                                     4          13          87        100
                                     Total      15          100
CE(I) student initiated
  Give answer                        0          1           7         7
                                     1          3           20        27
                                     2          1           7         34
                                     3          2           14        48
                                     4          8           53        100
                                     Total      15          100
  Ask content question               0          8           54        54
                                     2          6           40        94
                                     4          1           7         100
                                     Total      15          100
  Spontaneous speech                 0          10          67        67
                                     1          3           20        87
                                     4          2           13        100
                                     Total      15          100
  Collaboration (student initiated)  0          6           40        40
                                     2          2           13        53
                                     3          3           20        73
                                     4          4           27        100
                                     Total      15          100

295

Table D5.6 continued

Teaching behaviour                   Category   Frequency   Percent   Cumulative percent
  Collaboration (teacher prompted)   0          8           53        53
                                     1          2           13        66
                                     2          1           7         73
                                     3          1           7         80
                                     4          3           20        100
                                     Total      15          100
  Works on own                       0          1           7         7
                                     2          2           13        20
                                     3          2           13        33
                                     4          10          67        100
                                     Total      15          100
CE(CCG)
  Teacher encourages fair            0          7           47        47
  competition/cooperation/           1          1           7         53
  group work                         2          6           40        93
                                     4          1           7         100
                                     Total      15          100
CE(MB)
  Student misbehaviour               0          14          93        93
                                     4          1           7         100
                                     Total      15          100
  Teacher deals with misbehaviour    0          14          93        93
                                     4          1           7         100
                                     Total      15          100
No interaction
  Organise learning material         0          4           27        27
                                     1          2           13        40
                                     4          9           60        100
                                     Total      15          100
  Other activities                   0          5           33        33
                                     2          4           27        60
                                     3          2           13        73
                                     4          4           27        100
                                     Total      15          100
  Monitor                            0          1           7         7
                                     2          3           20        27
                                     3          2           13        40
                                     4          9           60        100
                                     Total      15          100
Interruption
  External                           0          11          73        73
                                     1          1           7         80
                                     4          3           20        100
                                     Total      15          100
  Silence                            0          13          87        87
                                     3          1           7         94
                                     4          1           7         100
                                     Total      15          100
Notes. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed. CE(I) refers to classroom environment (teacher initiated and student initiated interaction), CE(CCG) refers to classroom environment (competition, cooperation, group work), and CE(MB) refers to classroom environment (misbehaviour).

296

Table D5.7 Frequency distribution of focus dimension of CE (I) teacher and student initiated, CE (CCG), CE (MB), no interaction and interruption

Teaching behaviour                   Category   Frequency   Percent   Cumulative percent
CE(I) teacher initiated
  Presentation                       2          15          100       100
                                     Total      15          100
  Instruction                        0          1           7         7
                                     2          14          93        100
                                     Total      15          100
  Comment                            0          3           20        20
                                     2          12          80        100
                                     Total      15          100
  Pose problems and/or questions     2          15          100       100
                                     Total      15          100
  Organise                           2          14          93        93
                                     5          1           7         100
                                     Total      15          100
  Monitor                            0          2           13        13
                                     2          11          73        86
                                     5          2           13        100
                                     Total      15          100
  Social interaction                 4          12          80        80
                                     5          3           20        100
                                     Total      15          100
CE(I) student initiated
  Give answer                        0          1           7         7
                                     2          14          93        100
                                     Total      15          100
  Ask content question               0          8           53        53
                                     2          7           47        100
                                     Total      15          100
  Spontaneous speech                 0          10          67        67
                                     2          5           33        100
                                     Total      15          100
  Collaboration (student initiated)  0          6           40        40
                                     2          9           60        100
                                     Total      15          100
  Collaboration (teacher prompted)   0          8           53        53
                                     2          6           40        93
                                     5          1           7         100
                                     Total      15          100
  Works on own                       0          1           7         7
                                     2          14          93        100
                                     Total      15          100
CE(CCG)
  Teacher encourages fair            0          7           47        47
  competition/cooperation/           2          8           53        100
  group work                         Total      15          100

297

Table D5.7 continued.

Teaching behaviour                   Category   Frequency   Percent   Cumulative percent
CE(MB)
  Student misbehaviour               0          14          93        93
                                     1          1           7         100
                                     Total      15          100
  Teacher deals with misbehaviour    0          14          93        93
                                     2          1           7         100
                                     Total      15          100
No interaction
  Organise learning material         0          4           27        27
                                     2          11          73        100
                                     Total      15          100
  Other activities                   0          5           33        33
                                     1          10          67        100
                                     Total      15          100
  Monitor                            0          1           7         7
                                     2          14          93        100
                                     Total      15          100
Interruption
  External                           0          11          73        73
                                     1          4           27        100
                                     Total      15          100
  Silence                            0          13          87        87
                                     5          2           13        100
                                     Total      15          100
Notes. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed. CE(I) refers to classroom environment (teacher initiated and student initiated interaction), CE(CCG) refers to classroom environment (competition, cooperation, group work), and CE(MB) refers to classroom environment (misbehaviour).

298

Table D5.8

Frequency distribution of quality dimension of CE(I) teacher and student initiated, CE(CCG), CE(MB), no interaction, and interruption

Teaching behaviour                   Category   Frequency   Percent   Cumulative percent
CE(I) teacher initiated
  Presentation                       1          14          93        93
                                     3          1           7         100
                                     Total      15          100
  Instruction                        0          1           7         7
                                     1          13          87        94
                                     3          1           7         100
                                     Total      15          100
  Comment                            0          3           20        20
                                     1          10          67        87
                                     3          2           13        100
                                     Total      15          100
  Pose problems or questions         1          15          100       100
                                     Total      15          100
  Organise                           1          7           47        47
                                     3          8           53        100
                                     Total      15          100
  Monitor                            0          2           13        13
                                     1          6           40        53
                                     3          7           47        100
                                     Total      15          100
  Social interaction                 2          13          87        87
                                     3          2           13        100
                                     Total      15          100
CE(I) student initiated
  Give answer                        0          1           7         7
                                     1          14          93        100
                                     Total      15          100
  Ask content question               0          8           53        53
                                     1          6           40        93
                                     3          1           7         100
                                     Total      15          100
  Spontaneous speech                 0          10          67        67
                                     1          5           33        100
                                     Total      15          100
  Collaboration (student initiated)  0          6           40        40
                                     1          6           40        80
                                     3          3           20        100
                                     Total      15          100
  Collaboration (teacher prompted)   0          8           53        53
                                     1          3           20        73
                                     2          2           13        87
                                     3          2           13        100
                                     Total      15          100
  Works on own                       0          1           7         7
                                     1          8           53        60
                                     3          6           40        100
                                     Total      15          100

299

Table D5.8 continued.

Teaching behaviour                   Category   Frequency   Percent   Cumulative percent
CE(CCG)
  Teacher encourages fair            0          7           47        47
  competition/cooperation/           1          2           13        60
  group work                         2          5           33        93
                                     3          1           7         100
                                     Total      15          100
CE(MB)
  Teacher deals with misbehaviour    0          14          93        93
                                     1          1           7         100
                                     Total      15          100
No interaction
  Organise                           0          4           27        27
                                     1          1           7         34
                                     2          2           13        46
                                     3          8           53        100
                                     Total      15          100
  Other activities                   0          5           33        33
                                     1          10          67        100
                                     Total      15          100
  Monitor                            0          1           7         7
                                     1          12          80        87
                                     3          2           13        100
                                     Total      15          100
Interruption
  External                           0          11          73        73
                                     2          4           27        100
                                     Total      15          100
  Silence                            0          13          87        87
                                     2          2           13        100
                                     Total      15          100
Note. See Appendix D3 for coding and category descriptors. A code of '0' indicates the behaviour was not observed. CE(I) refers to classroom environment (teacher initiated and student initiated interaction), CE(CCG) refers to classroom environment (competition, cooperation, group work), and CE(MB) refers to classroom environment (misbehaviour).

300

Appendix D6: Fisher’s exact test, phi and Cramer’s V and cell frequencies.

Table D6.1 Cell frequencies for school and the number of modelling tasks

School    Code 1   Code 2   Total
Seam      1        0        1
Tasin     0        3        3
Nakib     3        0        3
Chadni    1        0        1
Hamida    2        0        2
Kanta     0        1        1
Mahbub    1        0        1
Aruna     3        0        3
Total     11       4        15
Notes. Cell entries are teacher counts (n = 15 teachers; 73% in code 1, 27% in code 2). For task number, code 1 refers to ≤ 4.2 modelling tasks, code 2 refers to > 4.2 modelling tasks. Within-school and total percentages follow directly from the counts.

301

Table D6.2 Cell frequencies for school and duration of structuring tasks

School    Code 1   Code 2   Total
Seam      1        0        1
Tasin     0        3        3
Nakib     3        0        3
Chadni    1        0        1
Hamida    2        0        2
Kanta     1        0        1
Mahbub    1        0        1
Aruna     3        0        3
Total     12       3        15
Notes. Cell entries are teacher counts (n = 15 teachers; 80% in code 1, 20% in code 2). Within-school and total percentages follow directly from the counts.

302

Table D6.3 Fisher’s exact p-value and Cramer’s V statistics for number of tasks and duration of teaching tasks and teacher experience Teaching behaviour Task number and experience Task duration and experience Exact p-value Cramer’s V Exact p-value Cramer’s V Orientation .34 1.00 .07 1.00 Structuring 1.00 .78 1.00 .67 Modelling 1.00 .74 .35 .91 Practice .64 .88 .35 .91 Questioning 1.00 .79 1.00 .78 CE (I) .64 .88 1.00 .80 CE (CCG) .60 1.00 .91 .78 CE (MB) .60 1.00 .60 1.00 Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment(misbehaviour).

Table D6.4 Fisher’s exact p-value and Cramer’s V statistics for number of tasks and duration of teaching tasks and teacher qualifications Teaching behaviour Task number and qualification Task duration and qualification Exact p-value Phi value Exact p-value Phi value Orientation .48 -.21 .56 -.21 Structuring 1.00 .11 1.00 -.08 Modelling .52 -.32 .57 -.26 Practice .52 -.32 .57 .26 Questioning .61 .19 .23 .43 CE (I) 1.00 .02 .28 -.34 CE (CCG) 1.00 .16 1.00 .24 CE (MB) 1.00 .16 1.00 .16 Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment(misbehaviour).

303

Table D6.5 Fisher’s exact p-value and Cramer’s V statistics for number of tasks and duration of teaching tasks and teacher self-efficacy Teaching behaviour Task number and Task duration and self-efficacy self-efficacy Exact p-value Cramer’s V Exact p-value Cramer’s V Orientation 1.00 .24 .34 .43 Structuring .62 .29 .31 .44 Modelling .09 .81 .58 .65 Practice 1.00 .25 .65 .45 Questioning .35 .89 .61 .84 CE (I) 1.00 .66 1.00 .74 CE (CCG) .53 .68 1.00 .42 CE (MB) .53 .68 .53 .68 Note. CE (I) refers to classroom environment (teacher-student interaction), CE (CCG) refers to classroom environment (competition, cooperation, group work), CE (MB) refers to classroom environment(misbehaviour).

304

Appendix D7: Confirmatory factor analysis results

Figure D7.1. Standardised parameter estimates of hypothesised one-factor CFA model of

structuring

Note. For annotations, cli refers to classroom environment (teacher-student interaction).

Figure D7.2. Standardised parameter estimates of hypothesised one-factor CFA model of classroom environment (teacher-student interaction)

305

Figure D7.3. Standardised parameter estimates of hypothesised one-factor CFA model of modelling

Figure D7.4. Standardised parameter estimates of hypothesised one-factor CFA model of questioning

306

Note. For annotations, clmb refers to classroom environment (misbehaviour).

Figure D7.5. Standardised parameter estimates of hypothesised one-factor CFA model of classroom environment (misbehaviour)

Figure D7.6. Standardised parameter estimates of hypothesised one-factor CFA model of practice

307

Figure D7.7. Standardised parameter estimates of hypothesised one-factor model of time management

Note. For annotations, clessi refers to classroom environment (student-student interaction).

Figure D7.8. Standardised parameter estimates of hypothesised one-factor CFA model of classroom environment (student-student interaction)

308

Figure D7.9. Standardised parameter estimates for hypothesised one-factor CFA model of assessment

Figure D7.10. Standardised parameter estimates for one-factor CFA model of questioning

309

Note. For annotations, clmb refers to classroom environment (misbehaviour), struct refers to structuring, quest refers to questioning, cli refers to classroom environment (teacher-student interaction), and model refers to modelling.

Figure D7.11. Standardised parameter estimates for hypothesised five-factor CFA model of teaching behaviours

310

Note. For annotations, clmb refers to classroom environment (misbehaviour), struct refers to structuring, quest refers to questioning, cli refers to classroom environment (teacher-student interaction), and model refers to modelling.

Figure D7.12. Standardised parameter estimates for five-factor CFA model of teaching behaviours

311

Table D7.1 Item-factor specifications for final five-factor CFA model of teaching behaviours

Structuring
  1   In Mathematics, we start the lesson with things that are easy to understand. As the lesson goes on, what we cover is more difficult.
  4   My teacher helps us to understand how different activities (such as exercises, subject matter) during a lesson are related to each other.
  7   When the teacher is teaching, I always know what part of the lesson (beginning, middle, end) we are in.
  10  Our teacher has good ways of explaining how the new things we are learning are related to things we already know.
  34  We spend time at the end of the lesson to go over what we have just learned.

CE(I)
  13  The teacher immediately comes to help me when I have problems doing an activity.
  16  The teacher gives all pupils the chance to take part in the lesson.
  20  Our teacher encourages us to ask questions if there is something that we do not understand during the lesson.
  21  During the lesson, our teacher encourages and tells us that we are doing good work (i.e. she/he says to us 'well done').
  26  When one of the pupils in the class is having difficulties with the lesson, our teacher goes to help him/her straight away.
  37  During a Mathematics lesson, our teacher asks us to give our own opinion on a certain issue.

CE(MB)
  30  When a pupil gives a wrong answer in Mathematics class, some of the other children in the class make fun of her/him.
  33  When the teacher talks to a pupil after they have been naughty, sometimes after a while that pupil will be naughty again.

Questioning
  24  When a pupil gives a wrong answer, the teacher helps her/him to understand her/his mistake and find the correct answer.
  38  Our teacher asks us questions at the beginning of the lesson to help us remember what we did in the previous lesson.
  40  When we do not understand a question, our teacher says it in a different way so we can understand it.
  42  When I give a wrong answer to a question, the teacher helps me to understand my mistake and find the correct answer.
  43  Our teacher praises all pupils the same when we answer a question correctly.

Modelling
  44  When we have problem-solving exercises and tasks in Mathematics lessons, our teacher helps us by showing us easy ways or tricks to solve the exercises or tasks.
  45  Our teacher lets us use our own easy ways or tricks to solve the exercises or tasks we have in Mathematics.
  46  In Mathematics lessons, our teacher teaches us ways or tricks that can be used in different lessons.
  47  Our teacher encourages us to find ways or tricks to solve the exercises or work she/he gives us.
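As a minimal illustrative sketch (not part of the original analysis), the item-factor mapping in Table D7.1 can be re-expressed as a lavaan-style measurement-model specification string. The `item1`, `item4`, ... variable names are hypothetical placeholders for the questionnaire items; only the factor-to-item assignment is taken from the table.

```python
# Illustrative sketch only: factor-to-item mapping from Table D7.1.
# The "itemN" names are hypothetical labels for the questionnaire items.
FACTOR_ITEMS = {
    "Structuring": [1, 4, 7, 10, 34],
    "CE_I": [13, 16, 20, 21, 26, 37],   # classroom environment (interaction)
    "CE_MB": [30, 33],                  # classroom environment (misbehaviour)
    "Questioning": [24, 38, 40, 42, 43],
    "Modelling": [44, 45, 46, 47],
}

def build_cfa_spec(factor_items):
    """Return a lavaan-style measurement-model string, one factor per line."""
    lines = []
    for factor, items in factor_items.items():
        indicators = " + ".join(f"item{i}" for i in items)
        lines.append(f"{factor} =~ {indicators}")
    return "\n".join(lines)

spec = build_cfa_spec(FACTOR_ITEMS)
print(spec)
```

Given item-level response data, a string of this form could in principle be passed to a CFA package that accepts lavaan-style syntax (e.g. R's lavaan `cfa()`), though the thesis does not specify the software used.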


Appendix D8: Modified developmental levels of teaching skills and checklist of criteria

Table D8.1 Modified developmental levels of teaching skills

Level 1: Basic elements of direct teaching
  Dimension        Teaching behaviour
  Frequency        Structuring
  Frequency        Practice
  Frequency        Questioning
  Frequency        CE (teacher initiated)

Level 2: Incorporating aspects of quality in direct teaching and touching on active teaching
  Stage            Structuring
  Quality          Practice
  Stage            Questioning
  Frequency        CE (student initiated)
  Focus            Practice
  Stage            Practice
  Quality          Questioning

Level 3: Acquiring quality in active/direct teaching
  Stage            CE (teacher initiated)
  Stage            CE (student initiated)
  Frequency        Modelling
  Frequency        Orientation
  Focus            CE (student initiated)
  Quality          Questioning (feedback)
  Focus            Questioning
  Focus            CE (teacher initiated)
  Quality          Structuring

Level 4: Differentiation of teaching
  Differentiation  Structuring
  Differentiation  Questioning
  Differentiation  Practice
  Stage            Modelling
  Stage            Orientation

Level 5: Achieving quality and differentiation in teaching using different approaches
  Quality          CE (teacher initiated)
  Quality          CE (student initiated)
  Differentiation  CE (teacher initiated)
  Differentiation  CE (student initiated)
  Focus            Orientation
  Quality          Orientation
  Differentiation  Orientation
  Quality          Modelling
  Focus            Modelling


Table D8.2 Checklist of criteria for determining developmental level of teaching skills

Level 1: Basic elements of direct teaching
  Dimension        Teaching behaviour                    Criteria
  Frequency        Structuring                           At least one task observed
  Frequency        Practice                              At least one task observed
  Frequency        Questioning                           At least one task observed
  Frequency        CE(TI)                                At least one task observed

Level 2: Putting aspects of quality in direct teaching and touching on active teaching
  Stage            Structuring                           Observed in any stage except 'transition'
  Stage            Practice                              Observed in any stage
  Stage            Questioning                           Observed in more than one stage
  Focus            Practice                              At least related to task
  Quality          Questioning (question type)           Either product or process questions
  Quality          Practice                              Either similar or complex
  Frequency        CE(SI)                                At least one task observed

Level 3: Acquiring quality in active/direct teaching
  Quality          Structuring                           Clear to students
  Quality          Questioning (answer feedback)         Comment on answer
  Quality          Questioning (student feedback)        Positive to correct answer, or constructive to incorrect or partly correct answer
  Frequency        Orientation                           At least one task observed
  Frequency        Modelling                             At least one task observed
  Stage            CE(TI)                                Observed in more than one stage
  Stage            CE(SI)                                Student question/speech/collaboration observed in more than one stage
  Focus            Questioning                           At least related to task
  Focus            CE(TI)                                Learning
  Focus            CE(SI)                                Learning in student question/speech/collaboration

Level 4: Differentiation of teaching
  Differentiation  Structuring                           Observed in one task
  Differentiation  Questioning                           Observed in one task
  Differentiation  Practice                              Observed in one task
  Stage            Orientation                           Observed in more than one stage
  Stage            Modelling                             Observed in more than one stage

Level 5: Achieving quality and differentiation in teaching using different approaches
  Focus            Orientation                           At least related to lesson
  Focus            Modelling                             At least related to lesson
  Quality          Orientation                           Students can specify aim of task or lesson
  Quality          Modelling (problem-solving strategy)  Teacher directs or guides students to solution
  Quality          Modelling (appropriate)               Students correctly solve problems
  Quality          CE(TI)                                Students are on task, with the exception of social interactions
  Quality          CE(SI)                                Students are on task during student question/speech/collaboration
  Differentiation  Orientation                           Observed in one task
  Differentiation  Modelling                             Observed in one task
  Differentiation  CE(TI)                                Observed in one task
  Differentiation  CE(SI)                                Observed in one task

Notes. CE refers to classroom environment. CE(TI) refers to classroom environment (teacher-student interaction) that is teacher initiated; CE(SI) refers to classroom environment (teacher-student interaction) that is student initiated. CCG refers to competition, cooperation, group work.


Table D8.3 Number of teachers at each developmental level of teaching skill (n = 15)

Each row lists, in order from Level 1 to Level 5, the dimension assessed for that teaching behaviour and the number of teachers demonstrating it.

Orientation:  Frequency 15; Stage 11; Focus 10; Quality 8; Differentiation 0
Structuring:  Frequency 14; Stage 14; Quality 12; Differentiation 0
Modelling:    Frequency 15; Stage 13; Focus 13; Quality 7
Practice:     Frequency 14; Stage 14; Focus 14; Quality 14; Differentiation 0
Questioning:  Frequency 9; Stage 7; Focus 7; Quality (question type) 7; Quality (feedback) 7; Differentiation 0
CE(TI):       Frequency 15; Stage 15; Quality 15; Focus 15; Differentiation 0
CE(SI):       Frequency 15; Stage 9; Quality 6; Focus 9; Differentiation 0

Notes. CE(TI) refers to classroom environment (teacher-student interaction) that is teacher initiated; CE(SI) refers to classroom environment (teacher-student interaction) that is student initiated.
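As a minimal sketch (illustrative only, not from the thesis), the counts in Table D8.3 can be re-encoded as a data structure; a simple lookup then makes the table's most striking pattern explicit, namely that no observed teacher demonstrated the differentiation dimension for any behaviour where it was assessed.

```python
# Counts from Table D8.3: number of teachers (of n = 15) demonstrating each
# dimension of each teaching behaviour. The dict layout is an illustrative
# re-encoding; dimension order follows the table's Level 1 -> Level 5 columns.
COUNTS = {
    "Orientation": {"Frequency": 15, "Stage": 11, "Focus": 10, "Quality": 8, "Differentiation": 0},
    "Structuring": {"Frequency": 14, "Stage": 14, "Quality": 12, "Differentiation": 0},
    "Modelling":   {"Frequency": 15, "Stage": 13, "Focus": 13, "Quality": 7},
    "Practice":    {"Frequency": 14, "Stage": 14, "Focus": 14, "Quality": 14, "Differentiation": 0},
    "Questioning": {"Frequency": 9, "Stage": 7, "Focus": 7,
                    "Quality (question type)": 7, "Quality (feedback)": 7, "Differentiation": 0},
    "CE(TI)":      {"Frequency": 15, "Stage": 15, "Quality": 15, "Focus": 15, "Differentiation": 0},
    "CE(SI)":      {"Frequency": 15, "Stage": 9, "Quality": 6, "Focus": 9, "Differentiation": 0},
}

def teachers_reaching(dimension):
    """Per-behaviour teacher counts for `dimension`, where it was assessed."""
    return {b: dims[dimension] for b, dims in COUNTS.items() if dimension in dims}

print(teachers_reaching("Differentiation"))
```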


Appendix D9: Teacher interview results

Table D9.1 Teacher self-reported orientation behaviours (n = 15)

Frequency
  Goal setting: 15 (all teachers)

Focus
  Performance goals
    Number: 15 (all teachers)
    1-3 goals: 12 (Shapla, Moni, Nila, Adnan, Saddam, Angel, Momota, Bela, Hazzaz, Bindu, Kazol, Shilpo)
    4-6 goals: 3 (Antu, Babu, Priya)
  Mastery goals
    Number: 3 (Momota, Hazzaz, Kazol)
    1-3 goals: 3

Quality
  Students understand goals: 15 (all teachers)
  Assist students to understand goals
    Use student ideas: 2 (Nila, Hazzaz)
    Provide overview: 3 (Nila, Bindu, Momota)
    Prior learning: 2 (Angel, Hazzaz)
    Use analogies, examples, questions: 4 (Hazzaz, Antu, Babu, Kazol)
    Use visuals: 1 (Priya)
    Not mentioned: 7 (Shapla, Bela, Shilpo, Moni, Adnan, Saddam, Momota)
  Check students understand goals
    Practice activities: 4 (Moni, Adnan, Momota, Bela)
    Questioning: 9 (Moni, Nila, Saddam, Hazzaz, Antu, Bindu, Babu, Priya, Kazol)
    Assumed (no checking): 1 (Angel)
    Not mentioned: 2 (Shapla, Shilpo)


Table D9.2 Teacher self-reported structuring behaviours (n = 15)

Frequency
  Set structure: 15 (all teachers)

Focus
  Planning
    Organisation
      Whole unit to lessons: 3 (Shapla, Moni, Hazzaz)
      Learning material: 1 (Hazzaz)
      Content progression from simple to difficult: 4 (Moni, Saddam, Hazzaz, Bindu)
  Teaching strategies
    Orientation
      Check prior learning: 2 (Bela, Hazzaz)
      Link lesson to prior learning: 2 (Bela, Hazzaz)
      Present overview, lesson goal: 11 (Adnan, Saddam, Angel, Momota, Bela, Hazzaz, Antu, Babu, Priya, Kazol, Shilpo)
    Presentation
      Present content: 14 (all except Shapla)
      Demonstrate problem solving: 14 (all except Shapla)
    Application and assessment
      Practice activities: 6 (Adnan, Bela, Babu, Priya, Kazol, Shilpo)
      Homework: 6 (Adnan, Bela, Babu, Priya, Kazol, Shilpo)
    Review
      Summarise lesson: 4 (Adnan, Babu, Priya, Kazol)

Quality
  Students understand: 15 (all teachers)
  Check students understand
    Practice activities: 6 (Moni, Nila, Momota, Bela, Antu, Kazol)
    Questioning: 4 (Moni, Nila, Saddam, Angel)
  Assumed
    Students familiar with the structure due to long-term learning experience with the teacher: 1 (Babu)
    Easy to understand, as the same structure is applied in all lessons: 1 (Priya)
  Not mentioned: 5 (Shapla, Adnan, Hazzaz, Bindu, Shilpo)


Table D9.3 Teacher self-reported modelling behaviours (n = 15)

Frequency
  Modelling: 15 (all teachers)

Focus
  Demonstration
    Present: 15 (all teachers)
    Prior learning link: 4 (Angel, Momota, Bela, Bindu)
    Use examples: 3 (Moni, Antu, Babu)
    Use questions: 3 (Saddam, Momota, Kazol)
    Students' participation: 4 (Antu, Babu, Priya, Kazol)
  Application
    Practice
      Seatwork or group work to solve problems: 2 (Adnan, Bela)

Quality
  New learning: 15 (all teachers)
  Appropriate modelling
    Students successfully apply
      Problem solving: 13 (Shapla, Moni, Nila, Adnan, Saddam, Angel, Momota, Bela, Antu, Bindu, Babu, Kazol, Shilpo)
      Answering questions: 10 (Nila, Adnan, Saddam, Bela, Hazzaz, Antu, Bindu, Babu, Priya, Kazol)


Table D9.4 Teacher self-reported practice behaviours (n = 15)

Frequency
  Activities: 15 (all teachers)

Focus
  Rationale
    Mastery, deep learning: 4 (Moni, Adnan, Bindu, Priya)
    Check for understanding: 13 (Nila, Adnan, Saddam, Angel, Momota, Bela, Hazzaz, Antu, Bindu, Babu, Kazol, Shilpo)
    Engage active learning, peer learning and cooperation: 5 (Moni, Nila, Adnan, Antu, Babu)
    Assess teaching strategies: 1 (Kazol)
    Not mentioned: 1 (Shapla)

Quality
  Types
    Individual seatwork: 8 (Moni, Nila, Saddam, Angel, Momota, Bela, Hazzaz, Bindu)
    Group work: 8 (Shapla, Moni, Adnan, Antu, Babu, Priya, Kazol, Shilpo)
    Similar tasks: 12 (Moni, Adnan, Saddam, Angel, Momota, Bela, Hazzaz, Antu, Babu, Priya, Kazol, Shilpo)
    Complex tasks: 3 (Saddam, Bindu, Kazol)
    Homework: 3 (Hazzaz, Kazol, Shilpo)


Table D9.5 Teacher self-reported questioning behaviours (n = 15)

Frequency
  Type
    Product or process (mixed): 14 (all except Shapla)
    Not mentioned: 1 (Shapla)

Focus
  Factors
    Student understanding: 14 (all except Shapla)
    Curriculum: 3 (Adnan, Nila, Moni)
    Not mentioned: 1 (Shapla)
  Rationale
    Product
      Evaluation: 14 (all except Shapla)
      Motivation: 1 (Nila)
      Enhance confidence: 1 (Antu)
    Process
      Enhance deep learning: 14 (all except Shapla)
    Not mentioned: 1 (Shapla)

Quality
  Student understanding
    All of them understand: 12 (Shapla, Moni, Nila, Adnan, Saddam, Angel, Bela, Hazzaz, Bindu, Babu, Priya, Kazol)
    Not all understand (depends on student level): 3 (Momota, Antu, Shilpo)
  Assist student understanding
    Repeat, reword: 4 (Shapla, Momota, Antu, Shilpo)
    Content related: 7 (Moni, Nila, Adnan, Saddam, Bindu, Babu, Kazol)
    Question clarity: 2 (Angel, Hazzaz)
    Not mentioned: 2 (Bela, Priya)
  Confirm student understanding
    Assumed understanding: 12 (Shapla, Moni, Nila, Adnan, Saddam, Angel, Bela, Bindu, Hazzaz, Babu, Priya, Kazol)
    Context dependent (student understanding): 2 (Momota, Antu)
    Not mentioned: 1 (Shilpo)
  Teacher response
    Partial answer: discussion to elicit answer: 4 (Moni, Bela, Babu, Priya)
    No answer: provide answer or probe: 4 (Moni, Bela, Babu, Priya)