EXPLORING COACH-TEACHER INTERACTIONS WITHIN A PRACTICE-BASED COACHING PARTNERSHIP

By

DARBIANNE SHANNON

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2017

© 2017 Darbianne Shannon

To seeking important growth opportunities: “Do the best you can until you know better. Then when you know better, do better.” ~Maya Angelou

ACKNOWLEDGMENTS

I would first like to thank the members of my doctoral committee. I am grateful to my chair, Dr. Elizabeth Bondy, for her passion to learn both with and from her students. Her support and thoughtful questions have made me consider the precision and meaning of my words more deeply. I am also immensely thankful for my co-chair, Dr. Patricia Snyder, for her wisdom, constructive feedback, guidance, and confidence in me and my work. Her model of scholarship, service, and advocacy for teachers, young children, and their families has helped me to stay focused on why the integrity of our work is of the utmost importance. I appreciate Dr. Nancy Dana for sharing her insight and experience in the field of professional development, with a particular focus on supporting teachers’ self-inquiry regarding current practice and integration of new knowledge and skills into their teaching repertoire. I thank Dr. Mary McLean for her ongoing encouragement, knowledge, and dedication to making recommended and evidence-based practices accessible to all stakeholders who support families of young children with disabilities.

I would like to express my gratitude to the Institute of Education Sciences for their continued support of the Impact of Professional Development on Preschool Teachers’ Use of Embedded Instruction Practices work (R324A07008 and R324A150076) and to Dr. Patricia Snyder, the principal investigator, for allowing me to access a portion of the coaching database for my dissertation. Without the support of the partner school districts, teachers, children, and families who participated in the study, this research would not be possible.

I am sincerely grateful for the opportunity to work at the Anita Zucker Center for Excellence in Early Childhood Studies at the University of Florida. Anita Zucker’s vision and commitment to interdisciplinary scholarship with the capacity to impact research, policy, practice, and the lives of young children and their families is exceptional. The mentorship provided by each of the faculty and the applied research opportunities I experienced through the Center have shaped my knowledge, beliefs, and professional goals.

I would also like to thank my current and former colleagues in the Snyder research lab at the University of Florida for their friendship and encouragement. My doctoral program would not have been the same without the opportunity to collaborate with and learn alongside each of them. I owe a special thanks to my secondary coder, Lauren Buroker, for her conscientious questions and dedication to learning the coding system. I am also truly appreciative of the embedded instruction coaching team at the University of Florida and Vanderbilt University for their collaboration, thoughts, resources, feedback, and dedication to supporting our coaching team, teachers, and the children in their classrooms.

Finally, I am most grateful for the love, support, and encouragement of my family and close friends. They have always believed in me, nurtured my passion for education, and demonstrated for me how informed actions, collaboration, quiet dedication, and perseverance can make a difference in the community and the lives of others.


TABLE OF CONTENTS

page

ACKNOWLEDGMENTS ...... 4

LIST OF TABLES ...... 10

LIST OF FIGURES ...... 12

LIST OF ABBREVIATIONS ...... 13

ABSTRACT ...... 14

CHAPTER

1 INTRODUCTION ...... 16

Rationale for the Study ...... 20
Dimensions of Follow-Up Support ...... 21
Empirical Efforts to Report the Process Dimension ...... 22
Practice-based Coaching ...... 23
Measuring Fidelity of Implementation of PBC ...... 27
Teacher and Child Outcomes Associated with PBC ...... 28
Embedded Instruction Practices and PBC ...... 29
Purpose Statement ...... 30
Significance ...... 32
Delimitations ...... 34
Limitations ...... 35
Summary ...... 36

2 REVIEW OF THE LITERATURE ...... 38

Search Procedures ...... 39
Early Childhood Professional Development ...... 40
Defining Professional Development ...... 42
Aligning Professional Development Inputs and Outcomes ...... 43
Features of Quality Professional Development ...... 44
Trends in Early Childhood Professional Development Research ...... 45
Use of systematic follow-up support ...... 46
Trends in studies classified as coaching ...... 46
Studies with PBC components ...... 48
Reporting limitations in systematic follow-up implementation support ...... 49
Recommendations for enhanced reporting ...... 49
Summary of Early Childhood Professional Development ...... 50
Implementing PBC in Early Childhood Contexts ...... 50
Evidence for the PBC Components ...... 51
Teaching practices ...... 51
Collaborative partnership ...... 52
Strengths and needs assessment to inform shared goal setting and action planning ...... 52
Focused observation ...... 53
Reflection and feedback ...... 53
Intervention Studies Implementing Practice-based Coaching ...... 54
Teacher outcomes ...... 54
Child outcomes ...... 56
Summary of Practice-based Coaching Literature ...... 56
Coaches as Mediators of Teacher’s Coaching Experiences ...... 56
Using Observational Coding Systems to Describe Coach Behavior ...... 57
Empirical Approach of Studies Using Observational Coding Systems ...... 57
Gaps and Recommendations for Future Studies ...... 60
Summary of Observational Coding Systems ...... 60

3 METHODOLOGY ...... 73

Research Questions ...... 74
Context for the Present Study ...... 75
Introduction of the Teaching Practices through Workshops ...... 76
On-site Practice-based Coaching ...... 78
Across coaching sessions ...... 78
Within each coaching session ...... 80
The Coaching Practices Observation Tool ...... 81
Step 1–Reviewing the Coaching Literature ...... 82
Step 2–Operationally Defining Coaching Process ...... 82
Step 3–Applying the Codes to Video ...... 83
Step 4–Conducting a Pilot Study of the CPOT-PV2 ...... 84
Step 5–Piloting the Revised CPOT-PV2 System Using Observational Coding Software ...... 86
Measures ...... 87
Coaching Practices Observation Tool ...... 87
Demographic Questionnaires ...... 88
Description of the classroom and teacher profile ...... 88
Description of the project staff demographic form ...... 89
PBC Weekly Coaching Session Data ...... 89
Coaching dose ...... 89
Coaching fidelity ...... 90
Action plans ...... 91
Teaching practices matrix ...... 91
Participants ...... 91
Coach Training and Support Procedures ...... 92
Training ...... 92
On-going implementation support for coaches ...... 92
Fidelity feedback ...... 93
Teacher Recruitment ...... 93
Procedures ...... 94
Video Coder Training Procedures ...... 94
Selection of Coaching Debrief Videos ...... 95
Coding Procedures ...... 96
Counting coach verbal behavior ...... 97
Duration of conversation focus and initiation ...... 97
Interobserver Agreement ...... 98
Data Extraction and Analyses ...... 99
Conversation focus ...... 99
Initiations ...... 100
Overall verbal behavior ...... 101
Verbal behavior by conversation focus ...... 102
Coach differences ...... 102

4 RESULTS ...... 116

Participant Demographics ...... 117
Coaching Implementation Variables ...... 118
Interobserver Agreement ...... 119
Conversation Focus ...... 119
Research Question 1: Proportion of Time Spent in Each Conversation Focus ...... 120
Research Question 1a: Coach Differences in Percent of Time Spent in Each Conversation Focus ...... 120
Research Question 2: Proportion of Time Spent in Each Conversation Focus Across Occasions ...... 121
Research Question 2a: Coach Differences in Proportion of Time Spent in Each Conversation Focus Across Occasions ...... 121
Research Question 3: Coach versus Teacher Initiations ...... 122
Research Question 3a: Proportion of Coach and Teacher Initiations by Conversation Focus ...... 123
Research Question 3b: Coach Differences in the Proportion of Coach and Teacher Initiations by Conversation Focus ...... 123
Research Question 4: Proportion of Coach and Teacher Initiations Across Occasions ...... 124
Research Question 4a: Proportion of Coach and Teacher Initiations Across Time by Conversation Focus ...... 124
Research Question 4b: Coach Differences in the Proportion of Coach and Teacher Initiations Across Occasions ...... 125
Coach Verbal Behavior ...... 125
Research Question 5: Rate of Coach Verbal Behavior ...... 126
Research Question 5a: Coach Differences in Rate of Verbal Behavior ...... 126
Research Question 6: Rate of Coach Verbal Behavior Across Occasions ...... 127
Research Question 6a: Coach Differences in Rate of Verbal Behavior Across Occasions ...... 128
Research Question 7: Rate of Coach Verbal Behavior by Conversation Focus ...... 128
Research Question 7b: Coach Differences in Rate of Verbal Behavior by Conversation Focus ...... 129
Coach-Teacher Dyads ...... 129
Coach and Teacher Dyad 1 ...... 130
Coach and Teacher Dyad 2 ...... 131
Coach and Teacher Dyad 3 ...... 133
Coach and Teacher Dyad 4 ...... 135
Coach and Teacher Dyad 5 ...... 137
Coach and Teacher Dyad 6 ...... 139
Coach and Teacher Dyad 7 ...... 141

5 DISCUSSION ...... 175

Interpretation of Findings ...... 175
Fidelity of Implementation ...... 175
Coaching Implementation Quality ...... 179
Conversation foci ...... 180
Verbal behavior ...... 183
The Influence of Demographics on Conversation Foci and Verbal Behavior ...... 193
Dyadic Transactions Across the Collaborative Partnership ...... 195
Conversation foci ...... 197
Coach-teacher initiations ...... 198
Building Teacher’s Confidence and Competence Towards Sustainability ...... 200
Implications from the Present Study ...... 201
Implications for Coach-based Professional Development ...... 201
Methodological Implications for Defining and Describing “Coaching” ...... 204
Recommendations for Future Research ...... 206
Overall Summary ...... 209

LIST OF REFERENCES ...... 213

BIOGRAPHICAL SKETCH ...... 224


LIST OF TABLES

Table page

2-1 The “who”: recipients of systematic follow-up...... 62

2-2 The “who”: providers of systematic follow-up...... 63

2-3 The “what” of follow-up...... 64

2-4 The “how” of systematic follow-up...... 65

2-5 Coaching studies aligned with PBC coaching components or strategies...... 66

2-6 Empirical evidence for practice-based coaching...... 67

3-1 Percentage of total time in each instructional learning format for cohort 1 fall modules across sites...... 104

3-2 Coaching practices observation tool-pilot version 1...... 105

3-3 Coaching practice observation tool-pilot version 2...... 106

3-4 Coaching practices observation tool-research version 1.0 operational definitions...... 107

3-5 Cohort 1 Coaching fidelity by debrief session type for larger study...... 109

3-6 Available video footage from larger study for included dyads...... 110

4-1 Coach demographics...... 145

4-2 Teacher demographics...... 146

4-3 Comparison of implementation of select coaching protocol variables for all teachers coached by each coach and for each teacher coached in the present study ...... 147

4-4 Duration of conversation focus with initiation modifier occurrence agreement by code...... 148

4-5 Verbal behavior occurrence agreement by code...... 149

4-6 Percent occurrence agreement and Cohen’s kappa by coaching debrief meeting video...... 150

4-7 Percent of time in each conversation focus...... 151

4-8 Percent of time in each conversation focus by dyad...... 152


4-9 Percent of time in each conversation focus for each dyad across occasions. .. 153

4-10 Percent initiations by coaches and teachers by conversation foci...... 155

4-11 Percent of coach and teacher initiations by conversation focus...... 156

4-12 Percent of coach and teacher initiations across occasions...... 157

4-13 Percent of coach and teacher initiations across time...... 158

4-14 Rate of coach verbal behavior per 30 minutes...... 159

4-15 Coach differences in rate of coach verbal behavior per 30 minutes...... 160

4-16 Rate of coach verbal behavior per 30 minutes across occasions...... 161

4-17 Rate of coach verbal behavior within each conversation focus per 30 minutes...... 162

4-18 Coach 1 rate and type of verbal behavior per 30 min by conversation focus. .. 163

4-19 Coach 2 rate and type of verbal behavior per 30 min by conversation focus. .. 164

4-20 Coach 3 rate and type of verbal behavior per 30 min by conversation focus. .. 165

4-21 Coach 4 rate and type of verbal behavior per 30 min by conversation focus. .. 166

4-22 Coach 5 rate and type of verbal behavior per 30 min by conversation focus. .. 167

4-23 Coach 6 rate and type of verbal behavior per 30 min by conversation focus. .. 168

4-24 Coach 7 rate and type of verbal behavior per 30 min by conversation focus. .. 169


LIST OF FIGURES

Figure page

1-1 Components of the practice-based coaching framework...... 37

3-1 Phases of the coaching partnership...... 111

3-2 Coaching protocol components by session...... 112

3-3 A page from the coaching practices observation tool-pilot version 1...... 113

3-4 Coded transcript of a coaching debrief meeting...... 114

3-5 Video sampling for present study...... 115

4-1 Percent of total time in each conversation foci...... 170

4-2 Percent of time within each conversation foci by coach...... 171

4-3 Percent of total time in each conversation foci across time...... 172

4-4 Rate of coach verbal behavior per 30 minutes across time...... 173

4-5 Rate of coach verbal behavior per 30 minutes within each conversation foci. .. 174


LIST OF ABBREVIATIONS

CPOT-RV1 Coaching Practices Observation Tool Research Version 1.0

EBP Evidence-based Practice

ECPDD Early Childhood Professional Development Database

EI Early Intervention

FGRBI Family Guided Routines Based Intervention

IES Institute of Education Sciences

IFSP Individualized Family Service Plan

NICHD National Institute of Child Health and Human Development

NPDCI National Professional Development Center on Inclusion

NRC National Research Council

PBC Practice-based Coaching

PD Professional Development

PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses

TfT Institute of Education Sciences Goal 3 efficacy trial of Tools for Teachers


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

EXPLORING COACH-TEACHER INTERACTIONS WITHIN A PRACTICE-BASED COACHING PARTNERSHIP

By

Darbianne Shannon

August 2017

Chair: Elizabeth Bondy
Co-chair: Patricia Snyder
Major: Curriculum and Instruction

Professional development (PD) that includes coaching has demonstrated the capacity to effect teachers’ use of evidence-based practices (EBP) in their classrooms. Not all studies involving coaching have reported positive outcomes, however. Researchers have called for additional investigations that define and describe the “active ingredients” of coaching, including content, structure, dose, and coaching processes that facilitate teachers’ acquisition of knowledge and skills.

The present study explored how coaches facilitated conversations during the debrief meeting component of an onsite Practice-based Coaching (PBC) partnership. A direct behavioral observation system was developed to investigate (a) the proportion of time spent in different conversational foci, including who initiated the conversation; (b) the verbal behaviors used by coaches; and (c) whether the conversation foci, initiations, and coach verbal behavior changed across three occasions for seven coach-teacher dyads. Data for the present study were collected as a part of a larger efficacy trial examining Tools for Teachers, a PD intervention designed to support classroom teachers’ use of embedded instruction practices with young children with disabilities (Snyder, Algina, Hemmeter, & McLean, 2015).

Results from the present study indicate that coaches (a) spent the largest proportion of time in the reflection and feedback and goal setting and action planning conversation foci and (b) used the study-required verbal behaviors (supportive and constructive feedback and clarifying questions) across all sampled occasions. These findings are consistent with the PBC framework and associated protocols that guided the debrief meeting.

Dyads exhibited variability in the proportion of time spent in each conversation focus, coach versus teacher initiations, and the rate of coach verbal behavior. Some variability was related to participant characteristics. Change in coach and teacher behavior across three sampled occasions was evident.

Findings from the present study support the importance of operationally defining the components of coaching and the actions or behaviors of coaches related to these components. Further exploration of coach-teacher transactional interactions and their contributions to the collaborative coaching partnership is needed to advance the science of coach-based professional development and to make decisions about (a) accommodations and adaptations for individual teachers, and (b) thresholds of acceptable modification within structured coaching frameworks.


CHAPTER 1
INTRODUCTION

Interdisciplinary advances in neuroscience, economics, and education research have demonstrated that high-quality early childhood experiences benefit the individual child and the community (Yoshikawa et al., 2013). High-quality early childhood experiences are associated with improved health, cognition, and social-emotional development later in life, providing the greatest benefit to children who are at risk due to environmental or biological risk factors (Shonkoff, 2010). Preschool teachers1 are critical mediators of children’s early experiences. Yet, the entry-level and professional development (PD) requirements for teaching in state-funded preschool programs vary widely. In the 2015-2016 school year, the National Institute for Early Education Research (NIEER) reported that preschool programs existed in 44 U.S. states, but only 25 states required a bachelor’s degree for lead teachers (Barnett et al., 2017). NIEER also found 37 states required inservice training of 15 hours or more per year; however, the quality of these training experiences has been described as inconsistent and rarely evaluated by states for its impact on classroom practice (Barnett et al., 2017).

A survey of 3,750 early childhood educators, conducted by the National Association for the Education of Young Children (NAEYC) in 2015, found 40% of teachers reported a lack of mentoring and guidance to be a critical barrier to their success in the profession, while 32% reported difficulty accessing relevant PD opportunities (NAEYC, 2015). The confidence and competence of teachers to provide high-quality preschool learning experiences for young children are critical for ensuring that the federal and state funds allocated to increasing access to preschool education, which enrolled more than 1.4 million children as of the 2015-2016 school year, produce optimal child outcomes.

1 Teacher is used here to represent the adult who is responsible for providing interactional and instructional support to the child. In the early childhood context, this person may or may not hold a formal degree or teaching certificate.

When seeking to support teachers’ implementation of evidence-based practices (EBP), multi-component interventions, or curricula associated with high-quality early learning experiences, the PD supports provided to teachers should be informed by theory and research. Teacher PD is typically characterized as occurring along a continuum from preservice to inservice. Preservice PD experiences occur prior to the teacher’s entry into the profession or classroom and often result in a degree or certification from an institution of higher education. In contrast, inservice PD is designed to enhance the knowledge and practice of teachers who are already working with children and families in classrooms or other settings. Teachers working with young children birth to age five in the United States often enter the profession with little or no preservice training, making inservice training critical for increasing the quality of education and care (Gomez, Kagan, & Fox, 2015). For the purpose of the present study, PD is defined as “facilitated teaching and learning experiences designed to enhance practitioners’ knowledge, skills, and dispositions as well as their capacity to provide high quality early learning experiences for young children” (Snyder et al., 2012, p. 188). Furthermore, the present study and the literature presented are primarily focused on inservice PD, the context in which the study data were collected.

Recent reviews of the PD literature have identified key features most likely to support teachers’ dispositions towards, and knowledge and implementation of, EBP in the early childhood context (e.g., Artman-Meeker, Fettig, Barton, Penney, & Zeng, 2015; Diamond, Justice, Siegler, & Snyder, 2013; Schachter, 2015; Sheridan, Edwards, Marvin, & Knoche, 2009; Snyder et al., 2012; Trivette, Dunst, Hamby, & O’Herin, 2009; Winton, Snyder, & Goffin, 2015; Zaslow, Tout, Halle, Whittaker, & Lavelle, 2010). These features include (a) the application of adult learning principles; (b) an explicit statement of the learning objective(s); (c) a focus on teaching practices that link research, theory, and practice; (d) opportunities for active participation including planning, practice with feedback, and self-evaluation; (e) content that is aligned with teachers’ professional values and context standards; and (f) the intensity of the PD, including the alignment between follow-up support and the desired PD outcomes.

The PD literature has also established that episodic and disjointed workshops generally are not effective for facilitating sustained learning or development in teachers’ knowledge, practices, or dispositions (Snyder, Hemmeter, & McLaughlin, 2011). A meta-analysis of PD studies delivered in K-12 settings, conducted by Joyce and Showers (2002), suggested that theory and discussion, plus demonstrations in training, plus practice and feedback in training, plus coaching in the classroom are most likely to lead to teachers’ use of practices in classrooms. An estimated 95% of participants in the studies analyzed achieved the desired outcome of practice use in the classroom when these components were used in combination. These findings and more contemporary applications of this research suggest workshops may continue to be a valuable component of PD packages for delivering content and achieving knowledge- and application-based outcomes, when followed by more individualized, job-embedded supports such as mentoring, consultation, and coaching to enhance implementation of knowledge and skills in practice settings (cf. Trivette et al., 2009).

A number of studies conducted with preschool teachers have demonstrated that teachers who receive inservice learning experiences plus job-embedded support in their classroom are more likely to improve their practice implementation when compared to (a) teachers who received materials, workshops, or coursework in the absence of job-embedded support (e.g., Neuman & Cunningham, 2009; Neuman & Wright, 2010; Pianta, Mashburn, Downer, Hamre, & Justice, 2008), and (b) teachers who participated in more traditional PD provided by their district, grantee, agency, or program (e.g., Greenwood, Abbot, Beecher, Atwater, & Petersen, 2017; Landry, Swank, Anthony, & Assel, 2009; Neuman & Cunningham, 2009; Powell, Diamond, Burchinal, & Koehler, 2010; Powell, Steed, & Diamond, 2010; Sutherland, Conroy, Vo, & Ladwig, 2015; Hemmeter, Snyder, Fox, & Algina, 2016).

The authors in these studies classified job-embedded support using various terms including mentoring, consultation, and coaching; however, the activities associated with the support provided were inconsistent across terms. As contemporary researchers investigate components of early childhood PD most likely to promote teachers’ implementation of EBP and child outcomes, it is critically important to move beyond vague terminology toward a more explicit description of (a) what the focus and desired outcomes of the PD experience were, (b) who the participants were and the context in which they worked, and (c) how participants were supported to achieve the desired outcomes (National Professional Development Center on Inclusion; NPDCI, 2008). The purpose of the present study was to explore and describe how coaches facilitated coach-teacher interactions within a Practice-based Coaching (PBC) framework to enhance teachers’ knowledge of and capacity to implement EBPs in public preschool classrooms.

Rationale for the Study

Substantial evidence suggests high-quality early learning experiences are contingent on teachers who are well prepared at entry and supported by ongoing PD. Furthermore, intentional alignment between the desired outcomes of PD and the type and intensity of support provided is necessary for programs to use limited resources efficiently and for improving teacher quality (Harris, 1980; McCollum & Catlett, 1997; Snyder & Wolfe, 2008; Winton et al., 2015; Zaslow et al., 2010). Recent reviews of the PD literature provide evidence that workshops in combination with follow-up implementation support are increasingly prevalent where enhanced knowledge and classroom practice are the desired outcomes. For example, Snyder et al. (2012) found in a literature review of early childhood PD studies that 74% of the 256 studies included some type of follow-up support, with a larger proportion of studies using follow-up support occurring in the last 10 years of the review. Early childhood PD research has made an important shift toward more empirically sound approaches to scaffolding teachers’ learning using job-embedded follow-up implementation supports. However, follow-up supports are often not well defined in the available literature, with important details about content, structure, and process dimensions being underreported (Powell & Diamond, 2013; Snyder et al., 2011).

The absence of detailed reporting presents two significant challenges: (1) replication and meta-analytic studies cannot be conducted, preventing the accumulation of knowledge regarding how to effectively and efficiently use PD and coaching resources; and (2) readers might be misled about the fiscal and human resources required to achieve similar outcomes, preventing the successful installation and scaling up of promising PD interventions. The argument for more comprehensive and thorough reporting of each dimension of the PD intervention variable, including workshops and follow-up supports, is well documented as the field seeks to advance the science of PD implementation (Halle, Metz, & Martinez-Beck, 2013; Snyder et al., 2011), including implementation supports such as coaching (Artman-Meeker et al., 2015; Isner et al., 2011; Snyder et al., 2012).

Dimensions of Follow-Up Support

Systematic follow-up implementation support, such as coaching, is composed of three dimensions: content, structure, and process (Powell & Diamond, 2013b). The content dimension represents the knowledge, practices, or dispositions that are the focus of coaching. The structural dimension characterizes the number, duration, frequency, interval, and delivery format of coaching sessions, which together make up the dose and intensity of support provided. The process dimension of coaching includes the coaching behaviors, strategies, and materials used to promote the teacher’s confidence, competence, and use of an identified set of teaching practices.

The content of coaching is established through reviews of the empirical literature prior to initiating a PD intervention and often includes information about (a) what children need to know and be able to do, and (b) behaviors or practices the teacher can use to support child learning. Researchers have documented this dimension through the use of PD scripts, fidelity checklists, and printed manuals and guides for both PD trainers and teachers (e.g., Conroy et al., 2015; Neuman & Wright, 2010; Powell & Diamond, 2013; Snyder, Hemmeter, & Fox, 2015). Similarly, the documentation of structural coaching features can be accomplished through the use of coaching logs and protocols in combination with training for coaches on how to record the frequency, duration, and type of their contacts with teachers after they occur (Durlak & Dupre, 2008; Wolery, 2011; Snyder et al., 2015). In contrast, process features are guided by training on the use of coaching protocols and materials, but the composition of behaviors, strategies, and materials is often, by design, intended for flexible and responsive use with teachers to meet their unique classroom contexts, preferences, motivations, and individual learning goals (Diamond & Powell, 2013; Snyder et al., 2015). This differentiation aligns with the principles of adult learning, but it presents a challenge for the accurate documentation of how various operationalized components are implemented during follow-up support interactions between coaches and teachers.

Empirical Efforts to Report the Process Dimension

Coach self-report of protocol implementation can be unreliable and ineffective in the absence of (a) clear operational definitions for the behaviors coaches are supposed to implement, (b) opportunities to practice implementation, and (c) ongoing performance-based feedback on implementation fidelity. Researchers have begun to use direct behavioral observation systems to gain more accurate information about the process dimension of coaching. Studies employing direct behavioral observation have focused primarily on two outcomes: (1) defining and describing the verbal behaviors of coaches as they interact with teachers and families during classroom and home-based coaching sessions, and (2) examining the accuracy of coaches’ self-report data. For example, Jayaraman, Marvin, Knoche, and Bainter (2015) developed a system to describe the topography of coach verbal and gestural behaviors across home- and school-based contexts, while Salisbury, Cambray-Engstrom, and Woods (2012) used direct observation to explore the agreement between coaches’ self-documented use of coaching strategies and those observed by a trained project staff member who reviewed video-recorded coaching sessions. Salisbury and colleagues (2012) found that coaches under-reported their use of coaching strategies. In fact, no significant relationship was found between coaches’ self-report and the trained observer for five of the seven possible coaching strategies.

The use of video-based direct behavioral observation coding systems to characterize the behaviors of coaches and those being coached is somewhat new, but it has the potential to advance research focused on coaching processes by helping to define and describe the “active ingredients” of coaching. In addition, direct observation data about coaching processes could be used to explore why some forms of coaching might or might not be efficacious. Data from these systems also could be used to inform the development of initial training and the provision of ongoing supports for coaches to implement and accurately record their coaching behaviors. The data could help decision makers and program leaders to install coach-based PD systems that, when implemented with fidelity, will lead to positive teacher and child outcomes. Accurately documenting the use of coaching behaviors and correlating them with teacher and child outcomes in future studies is critical for understanding when coaching should be used, for whom, and under what conditions.

Practice-based Coaching

Practice-based Coaching (PBC) is a coaching framework with operationally defined components and coach behaviors that, when implemented with fidelity, have demonstrated efficacy in supporting preschool teachers to implement and sustain their use of select EBP (Snyder, Hemmeter, & Fox, 2015). Several coaching frameworks have been described in the extant literature (e.g., Knight, 2007; Rush & Shelden, 2011), but there have been few rigorous empirical studies that have systematically examined relationships among the fidelity of implementation of these coaching frameworks, teachers’ implementation of practices that are the focus of coaching, and child outcomes. Thus, studies demonstrating positive outcomes for teachers and children linked to the use of PBC are needed.

PBC is unique among approaches to coaching in the degree to which it was designed with both theoretical and empirical support. The PBC framework was developed by reviewing and aligning its components with principles of how people learn (Donovan, Bransford, & Pellegrino, 1999), organizational behavior management, applied behavior analysis (Crow & Snyder, 1998), and transactional theory (Sameroff, 2009). The empirical evidence for PBC includes a number of multi-site randomized controlled trials (e.g., Conroy et al., 2015; Hemmeter et al., 2016), quasi-experimental studies (e.g., Artman-Meeker, Hemmeter, & Snyder, 2014), and single case design studies (e.g., Fox, Hemmeter, Snyder, Binder, & Clarke, 2011; Hemmeter, Hardy, Schnitz, Adams, & Kinder, 2015). Each of these studies has systematically measured and evaluated the degree to which coaches implemented the PBC components with fidelity and the corollary relationships with teachers’ fidelity of practice implementation in the classroom setting, using direct observation.

Snyder, Hemmeter, and Fox (2015) define PBC as “a cyclical process for supporting preschool practitioners’ use of effective teaching practices that lead to positive outcomes for children” (p. 2). PBC is composed of three cyclical components: (a) strengths and needs assessment to inform the development of shared goals and action planning, (b) focused observation, and (c) reflection and feedback. The PBC components occur within the context of a collaborative partnership and focus on a set of interrelated teaching practices (Figure 1-1). The PBC framework is distinct from other approaches to job-embedded support and coaching in three additional ways: (1) an emphasis on a targeted set of evidence-based interactional or instructional teaching practices, (2) the use of supportive and constructive performance-based feedback, and (3) the transactional collaborative partnership that is structured and goal-oriented while also allowing for individual adaptations and accommodations.

Teaching practices. Teaching practices are observable and measurable actions or behaviors teachers use in the classroom to support children’s development and learning. Effective teaching practices describe how the teacher will plan for, implement, and evaluate his or her teaching. PBC is a practice-ready coaching framework that can be (and has been) used to support high-quality early learning environments, interactions, and instruction across a variety of developmental domains. The teaching practices that are the focus of coaching set the occasion for the teacher to self-assess his or her strengths within a focused aspect of teaching and guide the development of the shared goal and subsequent action plan (Snyder & Wolfe, 2008).

Performance-based feedback. When they observe in the teacher’s classroom, PBC coaches collect data focused on the teacher’s implementation of the identified teaching practices and on how children are responding to those practices. This narrow, goal-oriented focus makes the coach’s role transparent and primes the teacher to receive performance-based feedback that helps him or her identify what is going well and how his or her teaching can be better aligned with the evidence-based teaching practices that are the focus of coaching (Snyder, Hemmeter, & Fox, 2015). This is referred to as supportive and constructive performance-based feedback.

This type of feedback is well aligned with principles of adult learning theory (National Research Council, 2000) and applied behavior analysis (e.g., Bijou, 1993), and an evidence base demonstrates that it is likely a critical “active ingredient” contributing to the efficacy of PBC (Casey & McWilliam, 2011; Kretlow & Bartholomew, 2010; Snyder, Hemmeter, & Fox, 2015; Thurlings, 2013).

Transactional collaborative partnership. The collaborative partnership is dyadic and transactional. It is developed over time and is shaped and maintained by the interactions, knowledge, skills, dispositions, and motivations of both the coach and the teacher. The coach and teacher may develop a relationship over time (Shannon, Snyder, & McLaughlin, 2015), but the partnership is characterized from the onset by both relational coaching practices (e.g., respect for individual learning histories, beliefs, and culture) and participatory coaching practices (e.g., active listening, questioning, shared goals; Dunst & Trivette, 2009). It is the use of these coaching practices that facilitates the transactional nature of the collaborative partnership. Coaching practices, in combination with the operationally defined PBC components, provide structures and processes that become routine. Coaching practices allow the coach to create intentional and systematic opportunities for the teacher to gain fluency in contributing to the coaching debrief meeting. These transactional exchanges increase the coach’s capacity to make accommodations and modifications to the coaching process and the teaching practices that are the focus of coaching, and to better meet the needs of the individual teacher, his or her context, and the children who are learning and developing under his or her care (Sameroff, 2009).

Supporting preschool teachers to gain confidence and competence with implementing EBPs with fidelity is a primary goal of PBC. When teachers achieve this level of fluency, they are more likely to experiment with generalization of teaching practices to new children and novel situations, increasing the likelihood that the teaching practices that are the focus of PBC will be sustained over time. Theoretical and empirical data have already demonstrated that PBC is effective for supporting positive teacher and child outcomes related to select teaching practices. Additional research is needed, however, to better understand which of these active ingredients, at what intensity, and in which combination, are efficacious for increasing the efficiency with which PBC is applied and the likelihood that desired outcomes are reliably achieved.

Measuring Fidelity of Implementation of PBC

The operationally defined components of PBC are a mechanism for guiding coaches’ consistent implementation of evidence-informed coaching practices, which have demonstrated the capacity to enhance and support the development of teacher practice when implemented with fidelity (Snyder, Hemmeter, & Fox, 2015). Fidelity of coaching implementation in PBC is measured through coach self-report data recorded on project-developed coaching logs and the ratings of trained project staff or “raters.” Fidelity data provide information about the degree to which coaching was implemented as intended and the extent to which observed outcomes can be reliably associated with coaching activities (Artman-Meeker et al., 2015; Isner et al., 2011; Snyder et al., 2011; Snyder et al., 2015). Furthermore, qualitative social-validity data regarding teachers’ experiences of participating in a PBC collaborative partnership illustrate how each component of the PBC framework was identified as an essential part of the coaching experience, from acquisition, to fluency, to sustained implementation of effective teaching practices in preschool classrooms (Shannon et al., 2015).

Teacher and Child Outcomes Associated with PBC

Practice-based coaching is designed to promote teachers’ implementation of effective teaching practices, which, in turn, should positively impact the early experiences and developmental trajectories of young children. PD studies implementing PBC have demonstrated noteworthy effects on teachers’ implementation of practices that were the focus of coaching, across a variety of delivery formats (e.g., expert, self-coaching, peer-coaching, onsite, web-mediated) and in diverse early childhood settings (e.g., Head Start, public and private preschool, and early childhood special education). In addition, PBC has been successfully adapted to support the implementation of diverse evidence-informed teaching practices including (a) Embedded Instruction for Early Learning (e.g., Snyder, Hemmeter, McLean, Sandall, McLaughlin, & Algina), (b) The Pyramid Model for Promoting Social-Emotional Competence in Young Children (e.g., Hemmeter et al., 2016), (c) BEST in CLASS, a set of 10 teaching practices focused on preventing young children’s challenging behavior to increase their engagement and learning (e.g., Conroy et al., 2015), and (d) emergent language and literacy practices to increase children’s communication and pre-reading skills (e.g., Greenwood et al., 2017). Some studies using PBC have also begun to identify positive outcomes for children enrolled in classrooms where teachers have received coaching (e.g., Conroy et al., 2015; Greenwood et al., 2017; Hemmeter et al., 2016).


Embedded Instruction Practices and PBC

Developmentally appropriate practices in early childhood education (National Association for the Education of Young Children, 2009) and recommended practices in early intervention/early childhood special education (Division for Early Childhood, 2014) promote the inclusion of young children with or at risk for disabilities in early childhood environments where they can access and participate in activities alongside peers without disabilities or delays. Embedded instruction is an evidence-based approach for supporting the development and learning of young children with or at risk for disabilities in inclusive environments. It involves providing planned, systematic, and intentional instruction within the ongoing activities and routines of high-quality early learning environments (Snyder et al., 2013).

Embedded instruction teaching practices were the focus of PBC in the present study. Preschool teachers enrolled in the larger efficacy trial from which data for the present study were obtained were introduced to the 14 key embedded instruction practices organized under four components (Snyder, Algina, Hemmeter, & McLean, 2015). A brief description of each component is provided below (Snyder et al., 2013).

• “What to Teach” practices support teachers to identify target behaviors or skills, which build on children’s strengths and are aligned with the general preschool curriculum and the child’s individualized education program (IEP). These target behaviors or skills are designed to increase access to and participation in classroom activities.
• “When to Teach” practices include the identification of natural and logical times of day to provide embedded learning opportunities for the child to practice the target behaviors or skills, with sufficient intensity to make measurable growth.
• “How to Teach” practices guide the selection of systematic instructional strategies that support the child to demonstrate the behaviors or skills under supportive conditions during meaningful classroom activities.
• “How to Evaluate” practices foster the teachers’ ability to evaluate their implementation of embedded instruction and the child’s progress, to make data-informed decisions about instruction.

Purpose Statement

The purpose of the present study was to systematically describe select aspects of the process dimension of coach-teacher interactions during the coaching debrief component of on-site, expert-coach-facilitated PBC collaborative partnerships. The Coaching Practices Observation Tool-Research Version 1.0 (CPOT-RV1; Shannon & Snyder, 2016), a direct behavioral observation system, was designed for the present study. The CPOT-RV1 quantifies what occurs during the coaching debrief meeting, including (a) the proportion of time allocated to different conversational foci, including who initiated the conversation focus; and (b) the type of verbal behavior used by coaches to support teachers’ active participation in planning for, implementing, and evaluating the teaching practices that are the focus of coaching.
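To make the two kinds of CPOT-RV1 summaries named above concrete, the sketch below shows one way the proportion of debrief time in each conversation focus, the split of coach versus teacher initiations, and the rate of coach verbal behavior per 30 minutes could be computed from coded events. It is a minimal illustration under assumed inputs: the code names, timestamps, and data structures are hypothetical and do not represent the actual CPOT-RV1 coding software or the analysis procedures described in Chapter 3.

```python
# Minimal, hypothetical sketch (not the CPOT-RV1 software): how the metrics named
# above could be computed from coded events for a single debrief meeting.
from collections import defaultdict

# Duration-coded conversation foci: (focus, start_sec, end_sec, initiator) -- hypothetical
focus_events = [
    ("reflection_and_feedback", 0, 540, "coach"),
    ("goal_setting_action_planning", 540, 900, "teacher"),
    ("other", 900, 960, "coach"),
]

# Frequency-coded coach verbal behaviors: (behavior, time_sec) -- hypothetical
verbal_events = [
    ("supportive_feedback", 65),
    ("clarifying_question", 300),
    ("constructive_feedback", 610),
    ("clarifying_question", 700),
]

session_seconds = max(end for _, _, end, _ in focus_events)  # total coded debrief time

# (a) Proportion of debrief time spent in each conversation focus
seconds_per_focus = defaultdict(float)
for focus, start, end, _ in focus_events:
    seconds_per_focus[focus] += end - start
proportion_per_focus = {f: s / session_seconds for f, s in seconds_per_focus.items()}

# (a, continued) Who initiated each conversation focus
initiations = defaultdict(int)
for _, _, _, initiator in focus_events:
    initiations[initiator] += 1

# (b) Rate of each coach verbal behavior per 30 minutes of debrief time
behavior_counts = defaultdict(int)
for behavior, _ in verbal_events:
    behavior_counts[behavior] += 1
rate_per_30_min = {
    b: count / (session_seconds / 60.0) * 30.0 for b, count in behavior_counts.items()
}

print(proportion_per_focus)  # e.g., {'reflection_and_feedback': 0.5625, ...}
print(dict(initiations))     # e.g., {'coach': 2, 'teacher': 1}
print(rate_per_30_min)       # e.g., {'clarifying_question': 3.75, ...}
```

Reporting rates per 30 minutes, as in the Chapter 4 tables, simply normalizes counts of frequency-coded behaviors to a common unit of debrief time so that sessions of different lengths can be compared.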

Data used to conduct the present study were collected within year one of an Institute of Education Sciences (IES) Goal 3 efficacy trial of Tools for Teachers (TfT), a PD intervention designed to support classroom teachers’ use of embedded instruction with preschool children with disabilities (Snyder, Algina et al., 2015). The PD intervention package has the following components: (a) four integrated high-quality workshops, (b) implementation guides and materials, (c) a comprehensive web-based multimedia tool kit, and (d) either expert onsite coaching or self-coaching via a website. Onsite coaching was the focus of the present study. Data sources for the present study were 21 video-recorded coaching debrief meetings representing seven coach-teacher dyads sampled on three occasions across 15 weeks of on-site coaching. The videos were used to address the primary aims of this study: to describe how coaches facilitate PBC debrief conversations with preschool teachers by quantifying the duration of conversation foci, including coach versus teacher initiations, and coach verbal behavior, and to examine whether changes in these variables are observed across time. The following research questions guided the present study:

1. What proportion of the coaching debrief time was spent in each conversation focus across all coaching debrief meetings?

1a. Did differences exist between dyads in the proportion of coaching debrief time spent in each conversation focus?

2. Did the proportion of coaching debrief time, spent in each conversation focus, change over time?

2a. Did differences exist between dyads in the proportion of coaching debrief time spent in each conversation focus over time?

3. Of the total number of conversation initiations, what proportion were initiated by the coach versus the teacher?

3a. What proportion of each conversation focus was initiated by the coach versus the teacher?

3b. Did differences exist between dyads in the proportion of conversation foci initiations by the coach versus the teacher?

4. Did the proportion of coach versus teacher initiations change over time?

4a. Did differences exist between dyads in the proportion of conversation foci initiations over time?

5. At what rate was each coach verbal behavior used across all coaching debrief meetings?

5a. Did differences exist between coaches in the rate of verbal behaviors used?

6. At what rate were coach verbal behaviors used within each conversation focus?

6a. Did differences exist between coaches in the rate of verbal behaviors used within each conversation focus?

7. Did the rate of coach verbal behavior change over time?

7a. Did differences exist between coaches in the rate of verbal behaviors used over time?


8. What were the demographic characteristics of each dyad, and were there differences in the coaching implementation, proportion of time spent in each conversation focus, coach and teacher initiations, and rate of coach verbal behavior relative to the overall sample?

Significance

Coaching is a core competency driver in the installation and sustained use of innovations and EBPs known to improve teacher and child outcomes (Halle et al., 2013). Yet, much remains unknown about the PD (Sheridan et al., 2009; Snyder et al., 2011) and coaching (Powell & Diamond, 2013b) processes that facilitate teacher learning about and implementation of EBPs. Research is needed that focuses on factors such as the content, form, and dose of PD or coaching and how these factors interact with the characteristics of individual learners to influence the extent to which innovations and EBP are used in the classroom. First-generation PD studies have helped to define and describe the features of PD that are efficacious. Second-generation studies are needed to disentangle the “active ingredients” of multi-component PD approaches and to identify what works, for whom, and under what conditions (Snyder et al., 2011).

One way to advance the science and application of coach-based professional development is to improve the documentation and reporting practices associated with independent variables and learner characteristics (Artman-Meeker et al., 2015; Isner et al., 2011; Schachter, 2015; Sheridan et al., 2009; Snyder et al., 2011; Snyder et al., 2012) so that relationships can be explored between these variables. The present study applies a finer-grained analysis to one aspect of TfT, a multi-component PD innovation that includes PBC. Building on the work of Snyder, Hemmeter, and Fox (2015), the present study explored coach-teacher interactions within PBC debrief meetings using a video-based observational coding system.

The Coaching Practices Observation Tool-Research Version 1.0 (CPOT-RV1; Shannon & Snyder, 2016) was developed, piloted, and used in the present study. The development of the CPOT-RV1 was significantly enhanced by the clearly articulated and operationally defined components of the PBC framework and the established theoretical and empirical foundation of PBC. These features of PBC justified the development of the CPOT-RV1 and the present study, because PBC has demonstrated efficacy for supporting positive teacher and child outcomes, on average, across several teaching practices, content domains, and instructional contexts. There is a significant need, however, to explore the parameters of adaptation and individualization that the PBC framework can tolerate while maintaining promising outcomes (Hulleman, Rimm-Kaufman, & Abry, 2013). Developing a system to further define and describe what aspects of the PBC framework and coaching practices are exhibited by coaches, for how long, and in what combinations is an initial step towards better understanding how coaches modify and adapt the PBC structure in transactional and responsive ways to support individual teacher learning.

Furthermore, exploring PBC coaching practices has the potential to enhance the preparation and ongoing systems of support provided to coaches by professional development providers and program leaders. Most fidelity of implementation data for PBC reported to date have focused on adherence to the PBC framework and coaching protocol, not the quality of implementation. Performance-based feedback for coaches about both the adherence and quality of coaching practices implementation is needed. The present study will help address this need.

Delimitations

The present study was conducted using an existing data set. These data focus on on-site expert coaches’ implementation of PBC, using project-developed protocols, to facilitate public preschool teachers’ knowledge and implementation of embedded instruction practices with children ages 3-5 years with disabilities.

The present study involved coaches who were affiliated with university-based research teams at two implementation sites. Although coaches were diverse in their professional experiences, project staff trained all coaches on the PBC framework and the associated coaching manual, protocols, and materials. The present study does not examine the use of different coaching frameworks or models. Furthermore, different delivery formats (i.e., on-site vs. web-mediated, expert vs. self) are examined within the larger efficacy trial; however, only the onsite expert coach data were included in the present study.

The PBC framework is composed of interrelated components, and the present study focuses primarily on the coaching debrief meeting, where action planning, goal setting, and reflection and feedback are observed. The focused observation influences the debrief interactions but is not directly included in the present study. Furthermore, the present study uses a single analytic technique to explore the coaching debrief meeting. There are several approaches to defining and describing the dyadic transactional exchanges between coaches and teachers, and the scope of this study is intentionally focused on the use of direct behavioral observation. Finally, the observation codes emphasize positive coaching practices and were not designed to provide information about practices not aligned with the theoretical foundations of the PBC framework.

Limitations

The present study has several limitations. First, the study data were collected within a larger efficacy trial, and the sample of coaches who provided expert onsite coaching was limited to university-based, project-trained coaches. It must also be noted that the teachers volunteered to participate in the study, representing a degree of interest and motivation in being coached. The findings of the present study represent a small sample of teachers who met specific inclusion criteria: (a) taught in the public school system, (b) held a state teaching certificate, (c) worked with children ages 3-5, (d) engaged in coaching around embedded instruction practices, and (e) volunteered for the study. Findings from this sample of teachers might not generalize to teachers in diverse early learning settings, particularly locations where teaching certification and credential requirements may be less rigorous.

Only three randomly selected videos were observed and coded for each teacher, which may over- or underestimate what actually occurred across the 15 weeks of coaching. Furthermore, the practices that were the focus of coaching (i.e., embedded instruction) could influence the observed conversational foci and the use of coach verbal behaviors and materials. Given the emergent state of the CPOT-RV1 and of studies exploring the process dimension of coaching, associations between the practices that are the focus of coaching and the use of various coach behaviors and conversation foci have not yet been established.

Finally, the CPOT-RV1 was designed to align with aspects of the PBC framework. This alignment between the PBC framework and the practices captured by the CPOT-RV1 increases the likelihood that coaches will engage in the specified behaviors. It is likely the CPOT-RV1 would have some degree of application to any dyadic coaching conversation (e.g., use of clarifying questions, personal pleasantries), but it may miss what are considered to be important aspects of other coaching frameworks or models. For example, it would not capture the use of vision-based coaching for leadership, which is grounded in intentional change theory. In this approach, the coachee focuses on articulating his or her “deepest aspirations, hopes, and dreams for the future, as well as positive aspects of one’s core identity” (Passarelli, 2015, p. 1), rather than focusing on the short-term goals and action steps toward those goals that are part of PBC.

Summary

High-quality early experiences, which lead to optimal developmental outcomes for young children, have been identified as an important outcome of teacher PD. Enhanced understanding of the process dimension of coach-based PD could (a) inform policy and program decisions regarding efficient and effective approaches to developing and installing an adaptive system of PD; (b) strengthen the association between intervention components and child and teacher outcomes; and (c) inform the ongoing training and technical assistance provided to coaches who work with preschool teachers. The present study explored how coaches facilitate conversations with preschool teachers within a PBC partnership. The emphasis of the exploration was the conversational focus, including who initiated each conversation focus, and the coaches’ use of different verbal behaviors.


Figure 1-1. Components of the practice-based coaching framework (Snyder, Hemmeter, & Fox, 2015).


CHAPTER 2
REVIEW OF THE LITERATURE

The purpose of the present study was to explore select aspects of the process dimension of coaching by examining how coaches facilitated conversations during the coaching debrief meeting of an on-site Practice-based Coaching (PBC) partnership. A direct behavioral observation system was developed to investigate (a) the proportion of time allocated to different conversational foci, including who initiated the conversation focus; and (b) the verbal behaviors used by coaches to support teachers’ active participation in planning for, implementing, and evaluating the embedded instruction practices presented through coaching. A review of the literature is presented in this chapter. It addresses (a) features of quality in early childhood professional development (PD), (b) systematic follow-up support as a component of early childhood PD, (c) empirical evidence demonstrating the efficacy of the PBC framework for facilitating teacher learning and child outcomes, and (d) extant literature applying observational coding systems to measure coach behaviors within collaborative coaching partnerships. These topics provide a complementary rationale for the present study.

First, the extant PD literature is reviewed, with particular emphasis on the use of systematic follow-up supports (e.g., coaching, mentoring, consultation) to aid teachers’ use of new or refined skills when interacting with young children. Second, characteristics of empirical studies examining the relationships between teachers’ participation in PBC and (a) fidelity of practice implementation in the classroom and (b) child outcomes are described to illustrate what is known and unknown about the content, structure, and process dimensions of this coaching framework. Third, existing observational coding systems that record coach-coachee verbal behavior, facilitation strategies and practices, and interactions are discussed.

Search Procedures

Literature reviewed in this chapter was identified using the Early Childhood Professional Development Database (ECPDD) housed in the Anita Zucker Center for Excellence in Early Childhood Studies. The ECPDD is updated annually through a systematic search procedure, which is informed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Moher, Liberati, Tetzlaff, & Altman, 2009). The student investigator has participated in the development and maintenance of the ECPDD since 2013.

First, the following electronic databases were searched: Education Full Text, ERIC, PsycINFO, and the Social Sciences Citation Index. Key words included natural language and controlled vocabulary related to PD delivery formats (e.g., workshop, pre-service, coaching); filters included terms related to early childhood ages and settings (e.g., infant, preschool, Head Start). Limiters were peer-reviewed journals, settings serving children birth to five, and studies published in English. Studies obtained were exported to reference management software, where duplicates were removed. All remaining articles were reviewed by title and abstract, and by full text when applicable, to determine whether they met the inclusion criteria. Inclusion criteria specified the study (a) included a form of teacher PD, (b) represented children ages birth to five, (c) included empirical data, and (d) occurred in the United States.
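To make the screening workflow concrete, the following Python sketch illustrates how exported records might be deduplicated and then screened against the four inclusion criteria. It is a minimal illustration only; the Record fields and function names are hypothetical and are not part of the ECPDD or its actual procedures.

from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    title: str
    year: int
    includes_teacher_pd: bool      # (a) includes a form of teacher PD
    ages_birth_to_five: bool       # (b) represents children ages birth to five
    reports_empirical_data: bool   # (c) includes empirical data
    conducted_in_us: bool          # (d) occurred in the United States

def deduplicate(records):
    """Drop duplicate database exports, keyed on a normalized title plus year."""
    seen, unique = set(), []
    for record in records:
        key = (record.title.strip().lower(), record.year)
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

def meets_inclusion_criteria(record):
    """Apply the four inclusion criteria described above."""
    return (record.includes_teacher_pd and record.ages_birth_to_five
            and record.reports_empirical_data and record.conducted_in_us)

def screen(records):
    """Return unique records that meet all inclusion criteria."""
    return [r for r in deduplicate(records) if meets_inclusion_criteria(r)]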


The identification process yielded 364 studies2 published prior to December 31, 2015. These articles were coded for 77 indicators across 10 areas (location, setting, child age, PD focus, PD content, PD model, PD strategies, follow-up strategies, research design, and independent variables) using the operational definitions developed by Snyder et al. (2012). Two hundred and twenty-seven (62%) of the 364 studies included systematic follow-up in the form of (a) coaching/performance feedback, (b) consultation, (c) mentoring, (d) peer support group, or (e) communities of practice/shared inquiry. The 227 systematic follow-up articles were coded for 122 additional indicators across 13 areas (follow-up agent, agent qualifications, follow-up recipient, target of follow-up support, follow-up frequency, duration of coaching, duration of feedback session, follow-up agent strategies, feedback format, focus of follow-up, follow-up protocol, follow-up fidelity, outcomes measured, and follow-up agent’s training and support within the study). Relevant reviews and book chapters addressing PD, PBC, and observational coding systems in the early childhood context were also identified.

2 Two of the articles included two studies, and one of the articles reported five studies, three of which met the criteria of this review, so the 364 studies represent 360 unique articles.

Early Childhood Professional Development

The education of young children is garnering unprecedented economic, policy, and research attention in the United States. Access to publicly funded preschool education services through statewide universal and voluntary pre-kindergarten and the concurrent expansion of Early Head Start slots reflects the changing national discourse around the education of young children (Barnett et al., 2017), one that emphasizes the potential to shift children’s developmental trajectories in powerful ways during the first five years of life (Shonkoff, 2010). Similarly, a number of national policy initiatives have begun to address the important work of developing cross-sector training, technical assistance, and PD systems that ensure teachers who work with young children have the capacity to provide high-quality education and care (NPDCI, 2008).

Contemporary policy initiatives that advocate for and allocate capital toward a more cohesive system of support for early childhood teachers include the Race to the Top Early Learning Challenge; the State Advisory Councils on Early Childhood Education and Care within the Head Start reauthorization act; the Individuals with Disabilities Education Act provisions for knowledge and skill development of Part B/619 and Part C providers; and the option to use Title I funds to support preschool initiatives under the Every Student Succeeds Act.

Opportunities and challenges for research in the early childhood sector are a product of this social policy attention. Dedicated funding from the National Institute of Child Health and Human Development (NICHD) and the Institute of Education Sciences (IES) National Center for Education Research and National Center for Special Education Research has supported the development of promising interventions which, when implemented with fidelity, are associated with important social and academic outcomes for young children. Many of these interventions have also explored the role of the classroom teacher in mediating child outcomes (Diamond et al., 2013). NICHD’s Child Development and Behavior Branch has program areas specifically focused on (a) promising interventions for children ages 3-5 years and (b) establishing causal linkages between teacher behavior and child outcomes. IES-funded research studies have shown (a) PD in early childhood classrooms can improve global classroom quality and the implementation of instructional practices, (b) enhanced teacher practice can impact child learning, and (c) PD can be delivered using a variety of onsite and distance formats (Diamond et al., 2013). Grant-funded research in early childhood has also illustrated that although there are some common features of effective PD, the early childhood context is distinct from K-12 and it is important to tailor PD supports to meet its unique needs (NPDCI, 2008; Zaslow et al., 2010; Snyder et al., 2011).

Defining Professional Development

For the purpose of the present study, PD is defined as “facilitated teaching and learning experiences designed to enhance practitioners’ knowledge, skills, and dispositions as well as their capacity to provide high quality early learning experiences for young children” (Snyder et al., 2012, p. 188). Professional development experiences occur along the continuum of an early childhood teacher’s career and often include a variety of delivery formats. For most early childhood teachers, PD is experienced as a combination of (a) formal credentials, certifications, and degrees; (b) PD events such as conferences, workshops, seminars, in-services, and trainings; (c) embedded learning opportunities, which involve context-based learning such as mentoring, coaching, consultation, and action research; and (d) informal interactions with colleagues or literature. Contemporary theory and research have demonstrated, however, that finding the “fit” between the PD experience and the desired outcomes of that experience on teachers’ individual knowledge, skills, and dispositions is critical (Harris, 1980; McCollum & Catlett, 1997; NPDCI, 2008; Shannon et al., 2015; Snyder et al., 2011; Snyder & Wolfe, 2008).


Aligning Professional Development Inputs and Outcomes

The NPDCI (2008) conceptual framework for PD in early childhood was developed to facilitate consensus building around how to define and enact PD. It includes three key components:

(a) the characteristics and contexts of the learners (i.e., the “who” of professional development, including the characteristics and contexts of the learners and the children and families they serve); (b) content (i.e., the “what” of professional development; what professionals should know and be able to do; generally defined by professional competencies, standards, and credentials); and (c) the organization and facilitation of learning experiences (i.e., the “how” of professional development; the approaches, models, or methods used to support self-directed, experientially-oriented learning that is highly relevant to practice.) (p. 3).

In addition, programs should consider the intended outcomes of the PD (i.e., the “why” of PD) to make decisions about who should attend, what content can and should be covered, and how learners will engage with the content.

These components, in combination with local contexts, provide a guide for individuals and programs who plan for, implement, and evaluate early childhood PD to develop experiences that “fit” an established learner or program need. Two studies that exemplify finding this alignment “fit” are presented here as examples. Anderson (2014) reported that informational content and interactive videos within a 2-hour webinar (how) about environmentally sound integrated pest management in early learning environments (what) was effective for increasing childcare administrators’ (who) knowledge of safe and effective pest management (why). Furthermore, most participants found the delivery format to be cost efficient and of sufficient intensity to increase knowledge and compliance with their local policy context. In contrast, Clements, Sarama, Wolfe, and Spitler (2015) reported on a study that provided preschool teachers (who) with 13 days of workshops and bi-weekly coaching over a period of 2 years (how) to support the installation and sustained fidelity of implementation (why) of an early childhood math curriculum (what). The intensity of this support was, the authors argue, important for enhancing the way teachers thought about and presented math content, and produced high levels of sustained fidelity of implementation 2 years post-intervention. Anderson (2014) and Clements et al. (2015) depict stark contrasts, illustrating an empirical application of how the NPDCI framework components can support the alignment of specified outcomes, or the “why” of PD (i.e., knowledge vs. implementation), with the intensity, or “how,” of PD (i.e., a 2-hour webinar vs. 2 years of workshops and coaching) to meet the needs of individual learners and programs. By considering each of the NPDCI components individually and in combination, program leaders can better support early childhood teachers and staff to increase the efficiency and effectiveness of PD.

Features of Quality Professional Development

In addition to aligning PD inputs and outcomes, there is a corpus of empirical research to guide providers and programs in selecting PD features that represent the extant knowledge on how people learn (Donovan, Bransford, & Pellegrino, 1999; Trivette et al., 2009) and findings from studies on what makes PD effective (Darling-Hammond & Richardson, 2009; Garet, Porter, Desimone, Birman, & Yoon, 2001; Guskey, 2003; Joyce & Showers, 2002; McCollum & Catlett, 1997; Snyder & Wolfe, 2008; Snyder et al., 2011; Zaslow, 2009). Many of the features can be applied across a number of PD delivery formats and experiences. The features include (a) a content focus with clear objectives; (b) integrated theory and practice; (c) opportunities for active engagement (i.e., discussion, modeling, demonstration, role-play) and feedback on implementation; (d) alignment with program goals; (e) matched intensity and desired outcomes; (f) the provision of a framework for self-evaluation; and, when appropriate, (g) follow-up support to apply practices in context. Researchers who seek to advance and clarify knowledge regarding these PD features are now investigating what combination of these features is required, and whether intensity (i.e., frequency, duration), delivery format (live vs. distance, group vs. individual, expert vs. peer vs. self), or individual characteristics mediate or moderate the efficiency and effectiveness of these features (Isner et al., 2011; Snyder et al., 2011; Snyder et al., 2012; Zaslow et al., 2010).

Trends in Early Childhood Professional Development Research

The extant early childhood PD literature provides insight into who is currently being served, what content is being addressed, and how PD is being delivered. Across the four databases searched, 364 articles addressing early childhood (birth to five) PD in the United States since 1970 were identified. Nearly a third (29%) of these articles were published between February 2011 and December 31, 2015. These data further illustrate the rapid growth in the nation’s investment in early childhood and the increased availability of PD to teachers who work with young children, a traditionally underserved workforce (LeMoine, 2015). Across the identified articles, the majority of teachers participating in PD were located in preschool (40%) or Head Start (44%) settings and worked primarily with children ages 3-5 years (89%), while fewer worked with infants and toddlers birth to 24 months (30%). Inservice/offsite (34%) and staff development/onsite (28%) workshops were the most prevalent PD delivery formats. The content of available PD was most often focused on the social-emotional (45%), literacy (34%), and language development (38%) domains.


Use of systematic follow-up support

Systematic follow-up strategies can support teachers to translate new or refined knowledge and skills gained through PD experiences into their classroom context. Follow-up strategies range in intensity (Snyder & Wolfe, 2008). Low-intensity follow-up support includes the provision of reading materials or websites to review at home or job aids such as visual cues that can be used in the classroom. More intensive and systematic follow-up support can take a number of forms, including coaching, mentoring, consultation, communities of practice, and peer support groups (Snyder et al., 2012).

The majority of articles (62%, n = 227) included in the review described the use of one or more systematic follow-up strategies. Of the studies that reported the use of systematic follow-up strategies, coaching was the most prevalent (82%, n = 186), followed by consultation (10%, n = 23). Mentoring, communities of practice, and peer support groups were less prevalent in the empirical literature.

Trends in studies classified as coaching

Eighty-two percent (n = 186) of the 227 studies that included systematic follow-up support were classified as involving “coaching” as a form of PD. This section provides additional information about the characteristics of the who, what, why, and how for the studies that were classified as involving coaching.

The “who”. The majority of the recipients of the coaching support were lead teachers (72%). Most recipients worked in classrooms classified as Head Start (44%), preschool (42%), and childcare (26%). Ninety percent of the programs served preschool-aged children, while only 29% served infants and toddlers. Nearly half of the programs served children with identified disabilities (46%) and children who were at risk for adverse childhood experiences (41%). Additional information about the recipients of coaching and other forms of systematic follow-up support can be found in Table 2-1.

The majority of the coaches were associated with research projects (54%), and some held a master’s degree (22%) and had teaching experience (34%). Twenty-three percent of coaches received training and support to be a coach and 17% had training on the content that was the focus of coaching. Twenty-five percent of the coaches participated in delivering training experiences for the recipients beyond coaching (e.g., workshops, webinars). Table 2-2 shows additional information about the people responsible for delivering the coaching and other forms of systematic follow-up support.

The “what”. Similar to the broader PD literature, the majority of the coaching studies had social-emotional (44%), language development (43%), and literacy (37%) content foci. Typically, coaching was focused on supporting teachers to implement a curriculum (23%) or a group of practices (20%), and it was the only intensive form of systematic follow-up support used to address a single practice (6%). Thirty-two percent of the coaching studies supported the teacher to implement class-wide teaching practices, but coaching was also used to support the teacher to work with one child (11%). More information about the content focus of the systematic follow-up support can be found in Table 2-3.

The “how”. Many of the PD studies did not adequately report information about how systematic follow-up support was provided. Eighty-two percent of the studies did not report the duration of the coaching collaboration; however, available data indicate coaching can occur for as little as 1 week or up to several years. Coaching sessions most often occurred weekly (31%) and monthly (24%). In the few studies that reported the length of coaching sessions, 29% lasted more than 30 minutes.

Seventy-five percent of the coaching studies did not report the use of coaching guides or implementation supports, but when these tools were reported they typically included manuals (11%) and scripts or protocols (8%). Eighty-one percent of coaching studies did not provide a measure of coaching implementation fidelity; however, 8% of the studies reported the use of a checklist and 5% reported the use of a project-developed fidelity measure. More information about the coaching and other forms of systematic follow-up support provided is shown in Table 2-4.

Studies with PBC components

Eighty-two percent (n = 186) of the 227 studies that included follow-up support were classified as coaching, and 29% (n = 54) of the coaching studies were classified as focusing on a set of interactional or instructional teaching practices. This section provides additional information about the studies that were classified as coaching around teaching practices and that included (a) components of PBC (Snyder, Hemmeter, & Fox, 2015) or (b) coaching strategies described in the PBC coaching manuals (Snyder, Hemmeter, Bishop, Shannon, & McLean, 2015). The majority of studies that were classified as coaching on a set of teaching practices reported the use of verbal performance feedback (70%) and observation (65%).

Strengths and needs assessment is considered to be a required component of implementing PBC with fidelity. Nineteen percent of all coaching studies reported the use of a strengths and needs assessment, but only 6% of the coaching studies that also focused on teaching practices used a strengths and needs assessment. In contrast, 19% of the coaching studies that reported the use of coaching to support teachers’ use of teaching practices reported the use of goal setting, and 13% reported graphic performance-based feedback, higher percentages than for all the coaching studies reviewed (3% and 8%, respectively). Action planning was under-utilized among both groups, at 3-4%. These data suggest that although coaching in some studies is focused explicitly on the implementation of teaching practices, the PBC framework provides a systematic and intentional way to integrate and consistently implement effective coaching practices, like those described in Table 2-5, to support teachers’ acquisition of those teaching practices.

Reporting limitations in systematic follow-up implementation support

Despite the use of a systematic procedure for locating and reviewing articles, much remains unknown about the dose of follow-up support and what occurs between participants during follow-up support because this information is underreported. Of the coaching studies reviewed, 82% did not report information about the duration of systematic follow-up, 29% did not report the frequency, and 54% did not report the length of follow-up sessions. In addition, these studies frequently did not report information about who the follow-up agent or interventionist was, including qualifications and training for his or her role (46%). Furthermore, only 25% of the studies reported the use of materials such as a manual or guide to inform the delivery of the systematic follow-up intervention, while only 19% reported conducting fidelity checks to ensure the systematic follow-up was delivered as intended.

Recommendations for enhanced reporting

The Zaslow et al. (2009) review of early childhood PD and the Isner et al. (2011) and Artman-Meeker et al. (2015) reviews of early childhood coaching provide recommendations for reporting PD activities and systematic follow-up, which explicitly address the reporting limitations described above. Their recommendations emphasize the need to describe the dose, follow-up strategies, and intervention or follow-up agent characteristics in detail. Although not yet prevalent, it appears these recommendations are making their way into the early childhood PD research community. Abell, Ariswalla, Putnam, and Miller (2014) explicitly reference these recommendations, stating:

By explicitly describing the details of the Family Childcare Partnerships (FCCP) professional development model, its underlying activities, and the processes FCCP follows to assist providers to improve their caregiving quality, we offer information others may find useful for the development and testing of practice-based professional development initiatives. (p. 588)

Increased specificity regarding the “who” (i.e., a description of personnel delivering PD and how they are trained/supported) and “how” (i.e., structure and processes, including delivery format and dose of follow-up support) is more prevalent in contemporary research (e.g., Landry et al., 2011; Pianta et al., 2014; Wasik & Hindman, 2011; Zan, 2014).

Summary of Early Childhood Professional Development

The number of rigorous empirical PD studies being conducted in early childhood is growing, but the quality and efficacy of that PD warrant additional research. Advancement of the science of early childhood PD cannot be achieved in the absence of a commitment by researchers to provide complete and accurate documentation of the PD features, including the content, structure, and processes, which led to the reported teacher and child outcomes (Sheridan et al., 2009; Snyder et al., 2011; Snyder et al., 2012; Zaslow et al., 2010).

Implementing PBC in Early Childhood Contexts

PBC is one form of coach-based professional development designed to enhance the confidence and competence of early childhood teachers to implement evidence-based teaching practices (Snyder et al., 2015). It is a form of systematic follow-up implementation support. PBC is often delivered in combination with workshops or other PD experiences and can occur in a variety of delivery formats (e.g., live or web-mediated; expert, peer, or self-facilitated). This section presents the theoretical and empirical support for the components of the PBC framework, followed by studies demonstrating the efficacy of this approach.

Evidence for the PBC Components

PBC is a cyclical process that includes (a) strengths and needs assessments that inform shared goals and action planning, (b) focused observation, and (c) reflection and feedback. The components occur within the context of a collaborative partnership and focus on a set of interrelated teaching practices (Figure 1-1).

Teaching practices

Teaching practices represent observable and measurable actions or behaviors teachers can use in the classroom to facilitate child learning. Isner et al.’s (2011) review of 44 early childhood coach-based professional development studies found 31 of the studies reported outcomes related to observable quality or teaching and interactional practices, while 21 of these studies linked teacher practices to child development and behavior outcomes. Implementation science frameworks suggest coaching is a key implementation driver for translating evidence-based interventions, innovations, and teaching practices into real-world contexts (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Martinez-Beck, 2013). All applications of the PBC framework occur around a set of identified teaching practices. In the present study, the identified teaching practices are related to embedded instruction.


Collaborative partnership

PBC is transactional and requires both the coach and teacher to contribute time, effort, and expertise. The collaborative partnership is grounded in a shared goal focused on practice implementation and mutually agreed upon expectations regarding the roles of the coach and teacher. Each partnership is individualized and continually evolves to meet the motivations, preferences, strengths, and needs of each teacher. The dyadic collaborative partnership is the primary focus of early coaching interactions and continues to develop throughout the coaching cycle. This dyadic, transactional, collaborative partnership is the context in which all other components of the PBC framework are implemented (Snyder et al., 2015).

Strengths and needs assessment to inform shared goal setting and action planning

The coach uses a strengths and needs assessment specifying the observable and measurable teaching practices that are the focus of PBC to elicit the teacher’s perspectives and priorities for the coaching partnership (Snyder & Wolfe, 2008). The strengths and needs assessment informs the initial goal development and can provide ongoing insight for the coach and the teacher regarding the teacher’s goals throughout the coaching cycle. Identified goals include one or more teaching practices, which the coach and teacher have mutually agreed upon as the focus for coaching over the next 2-5 weeks. Goals are recorded on an action plan, which specifies the materials and resources needed to achieve the goal, a timeline, and a criterion for the coach and teacher to know when the goal has been achieved. The strengths and needs assessment, goal, and action plan provide a “road map” for how the coach will support the teacher to achieve a goal he or she has identified as important for increasing confidence and competence with the use of evidence-based or evidence-informed practices to support child outcomes.
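As a rough illustration only, the elements of an action plan described above could be represented as a simple record; the Python field names and example values below are hypothetical and do not reproduce the actual PBC action plan form.

from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ActionPlan:
    goal: str                           # one or more mutually agreed-upon teaching practices
    action_steps: List[str]             # steps the teacher and coach will take
    materials_and_resources: List[str]  # supports needed to achieve the goal
    target_date: date                   # timeline for achieving the goal
    achievement_criterion: str          # how the dyad will know the goal has been met

# Hypothetical example
plan = ActionPlan(
    goal="Embed complete learning trials during center time each day",
    action_steps=["Review the activity matrix", "Video-record one center-time activity"],
    materials_and_resources=["activity matrix template", "video camera"],
    target_date=date(2024, 10, 15),
    achievement_criterion="Four or more complete learning trials observed across two visits",
)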

Focused observation

Observations are “focused” when the coach records information related to the teacher’s identified goal and action steps. Focused observations facilitate the coaching partnership by increasing transparency around the coach’s role in the classroom. The observation promotes fidelity of implementation by providing a setting event for the teacher to use a teaching practice (Kretlow & Bartholomew, 2010). In addition, the focused observation may also include participatory or on-the-spot coaching support aligned to the teacher’s preferences. This support is designed to enhance or refine the teacher’s implementation of practices through coaching strategies (e.g., modeling, verbal or gestural cues, feedback; Snyder, Hemmeter, Bishop et al., 2015). In situ demonstrations and interactions promote knowledge and skill acquisition and fluency through immediate feedback, which lets teachers know when, where, how, and with whom to use (or not use) particular teaching practices (Metz, Halle, Bartley, & Blasberg, 2013).

Reflection and feedback

Reflection and feedback occur within a post-observation coaching debrief meeting. Reflective conversations prompt the teacher to think about his or her implementation of the teaching practices, identify strengths, solve problems, ask questions, and plan for future implementation. Feedback within the PBC framework is performance-based and guided by the teacher’s current and former goals and action steps. Performance feedback is designed to provide the teacher with specific verbal, written, graphic, or video-based information about his or her implementation of identified teaching practices; it is both supportive and constructive (Snyder, Hemmeter, & Fox, 2015). Repeated performance-feedback opportunities related to specified goals can also support the development of error detection skills, which build the capacity to provide self-feedback aimed at reaching and sustaining the desired behavior (Hattie & Timperley, 2007; Kluger & DeNisi, 1996; Thurlings, Vermeulen, Bastiaens, & Stijnen, 2013). Teachers who have participated in PBC report performance feedback is useful and acceptable (Shannon et al., 2015).

Intervention Studies Implementing Practice-based Coaching

A number of studies that explicitly describe the use of PBC or the PBC components have been identified from the ECPDD. These studies are summarized in Table 2-6. The identified studies illustrate the efficacy of PBC for supporting teachers’ implementation of teaching practices designed to (a) increase global classroom quality and (b) facilitate the development of children’s social-emotional, language and literacy, and pre-academic skills and reduce challenging behavior through positive behavior intervention supports.

Teacher outcomes

The studies summarized in Table 2-6 include studies conducted with early childhood practitioners that explicitly indicated the use of PBC and those that included the PBC components (i.e., teaching practices, strengths and needs assessment, goal setting and action planning, focused observation, reflection, and performance-based feedback). The teacher and child outcomes shown in Table 2-6 suggest most teachers successfully acquired, gained fluency with, and maintained targeted teaching practices through participation in onsite or blended (i.e., onsite and web-mediated) delivery formats when PBC or the components of the PBC framework were used. Statistically significant differences in teachers’ classroom implementation of identified teaching practices, when compared to the control condition, were reported in all eight of the randomized controlled trials; significant effects were reported in three of four pre-post designs, and implementation of teaching practices above baseline was reported in four of the five single-subject studies.

The web-mediated delivery format, however, showed greater variability in teacher outcomes. For example, Artman-Meeker et al. (2014) found participation in 12 weeks of web-mediated coaching did not increase teachers’ implementation of Pyramid Model practices above that of teachers who participated in workshops alone. This finding suggests web-mediated support might not be the most effective delivery format for Pyramid Model practice supports, because onsite coaching around the Pyramid Model has been shown to be more effective than workshops alone (Hemmeter et al., 2016). In contrast, Powell and Diamond (2013) found no difference between onsite and remote coaching conditions on teacher practice implementation in their literacy intervention. The analyses of coach feedback across onsite versus web-mediated conditions found both remote and onsite conditions had significant effects on teachers’ implementation of identified literacy practices. It is important to note, however, that coaches provided more unique feedback statements when meeting with teachers onsite, suggesting the quality of coaching interactions might vary across formats. Additional analyses of the structural and process components of coaching in future studies will likely be required to discern whether an onsite versus web-mediated delivery format is more efficacious, for whom, and under what conditions.


Child outcomes

The majority of PBC studies reported working with children ages 3-5 in Head Start or public Pre-K classrooms. Positive child outcomes were reported in 9 of the 17 studies. Statistically significant effects on child learning and development were reported in five studies related to (a) language and literacy and (b) reduced challenging behavior and increased engagement and social skills. One study reported positive descriptive statistics. Three single-subject studies reported mixed results related to decreases in children’s challenging behavior in the classroom.

Summary of Practice-based Coaching Literature

The literature reviewed suggests PBC is effective for supporting preschool teachers’ competence and confidence to implement evidence-based and evidence-informed practices in the classroom setting. Furthermore, teachers’ fidelity of implementation has been associated with positive child outcomes in some studies. Additional research is needed to further explore the relationships between the coaching content, structure, and process dimensions of PBC and teacher and child outcomes to apply this framework effectively and efficiently across additional teaching practices, contexts, and learners.

Coaches as Mediators of Teachers’ Coaching Experiences

The effectiveness of coach-based approaches to PD, including PBC, relies on the coaches’ knowledge of the practices that are the focus of coaching and their ability to deliver the components or “active ingredients” as intended. There is a small but growing body of research that has examined how attributes and behaviors of the coach influence coaching processes. Both qualitative (e.g., Lofthouse, Leat, Towler, Hallet, & Cummings, 2010) and quantitative (e.g., Downer, Locasale-Crouch, Hamre, & Pianta, 2009; Diamond & Powell, 2010, 2013; Conroy et al., 2015) analyses of coach behavior have demonstrated the potential for individual coaches to mediate teachers’ experience of coach-based PD. Lofthouse et al. (2010) identified differences in the quality of constructive feedback, while Downer et al. (2009) and Powell and Diamond (2013) identified statistically significant differences in the quantity and type of feedback teachers received from coaches. In addition, the coach appears to influence the number of goal cycles completed (Downer et al., 2009) and the distribution of practices addressed across those goals (Powell, Steed et al., 2010). In a collaborative partnership, however, the teacher also plays a transactional role in these observed differences. Beyond reporting teacher and child outcomes, studies specifically exploring the role of coaches and the integrity with which they implement coaching appear important for better understanding how and why coaching does or does not support the implementation of EBPs in classrooms.

Using Observational Coding Systems to Describe Coach Behavior

Direct behavioral observation is one method with the potential to provide exploratory and descriptive insights into the actions and behaviors used by coaches to promote teacher participation in coaching conversations that occur as part of the collaborative partnership.

Empirical Approach of Studies Using Observational Coding Systems

Direct behavioral observation has been used in six early childhood professional development studies. Four studies used the Routine and Instructional Strategy Coding Protocol-IL (RISCP-IL; Salisbury, Cambray-Engstrom, Woods, & Friedman, 2008) to examine providers’ implementation of the Family Guided Routines Based Intervention (Woods, 2005). The fifth study used an unnamed set of operationally defined behaviors (Campbell & Coletti, 2013), and the sixth study used the Early Childhood Coaching Conversations (ECCC) coding system (Knoche & Bainter, 2012).

Each of the four studies using the RISCP-IL was designed to examine coaching practices of early interventionists providing home-based services related to the child’s individualized family service plan (IFSP; Friedman, Woods, & Salisbury, 2012; Marturana & Woods, 2012; Oborn & Johnson, 2015; Salisbury et al., 2012). Across all four studies, providers included licensed speech-language pathologists, occupational therapists, social workers, and early childhood special educators. Providers spoke English or Spanish. Coded videos included typical early intervention (EI) sessions with families on the providers’ existing caseloads. The RISCP-IL has seven codes, which represent evidence-based EI practices. When coding with the RISCP-IL, coders used the procedures developed by Salisbury and colleagues (2012). The entire EI session video was coded using a 30-second interval recording procedure. The codes were mutually exclusive and exhaustive (ME&E), ensuring that each interval received one code. In the event that two practices were observed, the code that comprised the majority of the interval was selected.
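The following Python sketch illustrates the general logic of a 30-second, mutually exclusive and exhaustive interval coding procedure with a majority rule. It is a generic illustration under assumed inputs (timed events labeled with codes); it is not the RISCP-IL itself or the procedures of Salisbury and colleagues.

INTERVAL = 30.0  # seconds per coded interval

def code_intervals(events, session_length, default_code="no_practice_observed"):
    """events: list of (start_sec, end_sec, code) tuples describing observed practices.
    Returns one code per 30-second interval, selecting the code that occupies the
    majority of each interval (mutually exclusive and exhaustive)."""
    n_intervals = int(session_length // INTERVAL) + (1 if session_length % INTERVAL else 0)
    coded = []
    for i in range(n_intervals):
        lo, hi = i * INTERVAL, min((i + 1) * INTERVAL, session_length)
        durations = {}
        for start, end, code in events:
            overlap = max(0.0, min(end, hi) - max(start, lo))
            if overlap > 0:
                durations[code] = durations.get(code, 0.0) + overlap
        coded.append(max(durations, key=durations.get) if durations else default_code)
    return coded

# Two hypothetical practices overlap the first interval; the majority code is retained.
print(code_intervals([(0, 20, "caregiver_practice"), (20, 45, "direct_teaching")], 60))
# -> ['caregiver_practice', 'direct_teaching']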

Salisbury and colleagues (2012) used the RISCP-IL to determine the extent to which providers implemented the evidence-based coaching practices and recorded their use on the intervention fidelity log. Ninety videos across six providers were included in the sample. Friedman et al. (2012) used the RISCP-IL to describe the influence of a training followed by weekly reflective supervision on providers’ use of evidence-based coaching practices over 18 weeks, and to compare coaching practices used by different programs. Thirty-six videos across 12 providers were included in the sample. Marturana and Woods (2012) provided monthly, web-mediated, video-chat performance feedback to 34 EI providers for 1 year, which resulted in a significant increase, on average, in one EBP and a decrease in one undesirable practice. Osborn and Johnson (2015) also used the RISCP-IL in conjunction with web-mediated coaching to send weekly written and graphic performance feedback to three EI providers over a period of 6 weeks, with minimal changes observed using a multiple-baseline design.

Campbell and Coletti (2013) examined the agreement between EI providers and a blind coder as to when coaching strategies did or did not occur. Seventy-five providers submitted between one and three video segments of their regularly scheduled sessions. Each segment represented what they believed to be one of five strategies introduced during 13 hours of training held over 6 days. Video segments were coded for length, primary functional skill, severity of disability, and one of five coaching strategies. Agreement between the independent raters and the providers was 85%, indicating providers could identify and implement the strategies when a single strategy was the focus of each video segment.
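Overall percent agreement of this kind can be computed simply, as in the Python sketch below; this is a generic illustration, not the study’s actual agreement procedure, and the strategy labels are hypothetical.

def percent_agreement(coder_a, coder_b):
    """Point-by-point percent agreement between two equal-length lists of judgments."""
    if not coder_a or len(coder_a) != len(coder_b):
        raise ValueError("Judgment lists must be non-empty and the same length.")
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * agreements / len(coder_a)

# Hypothetical example: agreement on which strategy each video segment showed.
print(percent_agreement(["modeling", "cueing", "feedback"],
                        ["modeling", "cueing", "problem_solving"]))  # ~66.7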

Jayaraman and colleagues (2015) conducted an exploratory study of 24 coach-teacher dyads’ verbal and gestural behavior after a 3-day workshop on characteristics of effective coaching, using the ECCC coding system. Each coach submitted one video of a self-selected coaching meeting focused on an early intervention topic with an existing client. These videos were coded using a discrete 2-minute interval coding system. Findings suggest there was variation in the frequency of behaviors demonstrated; however, some behaviors were used more frequently across all 24 participants. Verbal acknowledgment, nonverbal acknowledgment, and clarifies intent had higher rates per minute than other coded behaviors.

Gaps and Recommendations for Future Studies

Gaps in the extant early childhood direct behavioral observation literature focused on coaching reflect a need for further research that addresses the absence of (a) a coding system that looks specifically at behaviors occurring during a dedicated coaching debrief meeting between the participant and the provider in the absence of child care responsibilities; (b) a common set of instructional provider practices around which coaching occurred (i.e., what the teacher is learning), including any necessary modification of strategies used by coaches; (c) information about the duration of the coaching partnership (i.e., session 2 versus session 15); and (d) the systematic coding of materials used to facilitate the interaction. In addition, the existing systems described above used interval coding procedures that may under- or overestimate the occurrence of behavior depending on the length of the interval and the coding decision rules (Yoder & Symons, 2010). Each of these is an area for future research to better understand coaching processes.

Summary of Observational Coding Systems

Direct behavioral observation has a well-established place in studies of young children’s behavior (Pellegrini, Symons, & Hoch, 2014). The use of this method to examine coach-based professional development in early childhood contexts is relatively new. This promising method might serve as an important tool for better understanding the role of the coach in PBC and other coach-based PD initiatives. From this level of specificity, important information can be obtained about the coaching process, including (a) which verbal behaviors are used at higher rates and how they are used to facilitate different coaching foci, and (b) how coaches and teachers make adaptations and accommodations over time, changing their roles and the duration and frequency of coaching processes as the partnership develops. These data are important for understanding how differences in the coaching experience for individual teachers may be attributable, in part, to the individual coach and his or her knowledge, skills, and interactions.

Chapter 3 describes the methods of the present study. It includes information about the development and piloting of the direct behavioral observation system designed for the present study to examine the conversation foci, coach versus teacher initiations, coach verbal behaviors, and use of materials in PBC debrief conversations.

61 Table 2-1. The “who”: recipients of systematic follow-up. Systematic Peer Follow-Up Community of Coaching Consultation Mentoring Support Support Practice Group Studies (n = 227) (n = 186) (n = 6) (n = 22) (n = 14) (n = 8) Work Setting Head Start 44% 44% 17% 68% 43% 25% Early Head Start 3% 2% - 9% 7% 13% Child Care 26% 29% - 27% 14% - Special Education 9% 1% - 9% - 13% Early Intervention 7% 5% 17% 5% 14% 25% Family Childcare Home 4% 5% - - - - Pre-School 40% 42% 50% 27% 36% 38% Kindergarten 2% 2% - 9% - - Other 7% 7% 33% 5% 7% - Not Reported > 1% 1% - - - -

Children Served With Disabilities 43% 46% 17% 18% 29% 63% At Risk 43% 41% 50% 77% 36% 25% None with Disabilities 1% 1% - - 7% - Not Reported 12% 11% 33% 5% 29% 13%

Age Infants-Toddler 30% 29% 50% 27% 29% 25% Preschool 89% 90% 83% 96% 79% 75%

Follow-Up Recipient Lead Teacher 72% 72% 67% 19% 79% 50% Paraprofessional 14% 15% 17% 9% 14% 13% Home Care Provider 6% 6% - - 7% 25% Team 11% 12% - 14% 7% 25% Undergraduate/Student 8% 10% - - - - Other 7% 5% 50% 9% 7% 13%


Table 2-2. The “who”: providers of systematic follow-up. Systematic Community of Peer Support Follow-Up Coaching Consultation Mentoring Practice Group Support Studies (n = 227) (n = 186) (n = 6) (n = 22) (n = 14) (n = 8) Follow-Up Agent Self 12% 4% 50% 14% - - Peer 11% 7% 50% 9% 36% 50% Supervisor 11% 9% 17% 9% 29% 25% Consultant 22% 18% - 59% 14% 13% Research 48% 54% 50% 27% 21% 25% Other 4% 5% - 5% - 13% Not Reported 9% 10% 17% - - 13%

Agent Qualifications Less than Bachelors 2% 2% - - 7% 25% Bachelors 13% 13% - 18% 7% 25% Masters 23% 22% 33% 32% 29% 13% Higher than Masters 9% 8% 17% 27% - 13% Teaching Experience 39% 34% 50% 18% 43% 25% Experience with Adults 19% 17% 17% 40% 21% - Other 5% 5% - 9% - - Not Described 43% 46% - 27% 29% 38%

Agent Training/Support Study Provided Agent Training/Support 26% 23% 33% 55% 43% 50% One Day Training 2% 1% - - 14% - Multi Day Training 9% 8% - 23% 7% 13% Ongoing Support 15% 14% 33% 27% 14% 13% Periodic Support 5% 4% - 9% 14% 13% Content Focus 19% 17% - 41% 21% 50% Adult Learning Focus 14% 13% - 55% 29% 13% Agent Attended/ Presented Recipient Training 24% 25% 67% 27% 7% 13%

63 Table 2-3. The “what” of follow-up. System atic Peer Follow- Community Coaching Consultation Mentoring Support Up of Practice Group Support Studies (n = 227) (n = 186) (n = 6) (n = 22) (n = 14) (n = 8) Content Area Approaches to 5% 5% - 9% 7% - Learning Language 38% 43% 33% 32% 21% - Development English Language 1% 1% 33% 5% - - Development Literacy 34% 37% 50% 32% 14% 13% Logic and Reasoning >1% >1% - - - - Mathematics 8% 9% - 14% 7% 25% Science 2% 3% - - - - Social Studies >1% >1% - - - - Physical Development 5% 5% - 5% 7% - Social/Emotional 45% 44% - 72% 36% 50% Creative Arts 1% 2% - 5% - - Expression Other 10% 10% - 9% 14% - Not Reported 21% 19% 50% 5% 43% 25%

Instructional Practices Single practice 4% 6% - - - - Group of practices 19% 20% 17% 23% 7% 2% Practices from a multi- 6% 7% - 5% - - component approach Multi-component 14% 17% 17% 5% - - approach Targeted Curriculum 17% 17% 16% >1% 29% 13% Curriculum 23% 23% 33% 23% 29% - Other 14% 15% 33% 5% 7% 25% Not Reported 17% 12% - 41% 57% 25%

Focus of Instruction One Target Child 9% 11% - 9% - - Target Children 5% 6% - - - Small Group 3% 4% - - - - Whole Class 30% 32% 33% 27% 29% 38% Program 4% 3% - 18% - - Other 1% 2% - - - - Not Reported 44% 43% 67% 37% 71% 38%


Table 2-4. The “how” of systematic follow-up. Systematic Peer Follow-Up Community Coaching Consultation Mentoring Support Support of Practice Group Studies (n = 227) (n = 186) (n = 6) (n = 22) (n = 14) (n = 8) Duration of Collaboration <1 Week ------1 Week >1% 1% - - - - 1 Month 6% 8% - - 7% - 1 Quarter 9% 10% 33% 9% - - 1 Semester 13% 13% 17% 18% 7% 13% 1 Year 18% 19% - 23% 14% 38% > 1 Year 13% 12% 17% 9% 29% 38% Not Reported 38% 82% 2% 10% 7% 1%

Frequency of Session > 1 per Week 12% 13% - 5% 21% - Weekly 31% 34% 50% 23% - 13% Monthly 23% 24% - 18% 7% 50% Infrequently 4% 5% - 5% - 38% Not Reported 32% 29% 50% 50% 29% 25%

Session Length 0-15 8% 10% - 5% - 13% 15-30 8% 10% - 5% - 13% 30+ 27% 29% 33% 9% 7% 50% Not Reported 58% 54% 67% 82% 93% 25%

Follow-Up Agent Guides Manual 9% 11% 17% 5% 7% - Rubric 1% >1% 17% 9% - - Script/Protocol 7% 8% - 5% - - Other 4% 5% - 5% - - Not Reported 76% 75% 67% 77% 93% -

Fidelity Checklist 8% 8% - 13% 7% - Observation Measure 5% 5% - 14% - - Rating Scale >1% - - - - - Other 9% 10% 16% 18% - 13% Not Reported 80% 81% 83% 59% 93% 87%


Table 2-5. Coaching studies aligned with PBC coaching components or strategies (values shown for systematic follow-up support studies, n = 227; coaching studies, n = 186; and coaching + teaching practices studies, n = 54, respectively).

Focus on teaching practices: 28%, 29%, N/A
Needs assessment: 8%, 19%, 6%
Goal setting: 17%, 3%, 19%
Action plan: 2%, 3%, 4%
Observation: 61%, 65%, 65%
Performance feedback (verbal): 57%, 64%, 70%
Performance feedback (written): 22%, 27%, 28%
Performance feedback (graphical): 6%, 8%, 13%
Reflective journal: >1%, 1%, 2%

Table 2-6. Empirical evidence for practice-based coaching. (Columns: Study Reference; Participants; Teaching Practices; Coaching Dose; Design; Teacher Outcomes; Child Outcomes)

Abell, Family Childcare Global Quality Format: Onsite Pre-Post Global quality as Not Reported Ariswalla, Homes (N = 365) Duration: Provider measured by the Putnam, and preference FDCRS3 Miller (2014) Frequency: 1-2/week Length: 2-2.5 hours PBC

Components

Artman- University-based Pyramid Model Format: Video Multiple Team 1: Clear Team 1 child: Meeker & Childcare Teams Practices: Observation and Email baseline effects on social- Lower levels of Hemmeter, (N = 4); Children Transitions, Feedback Single emotional challenging 2012 age 3-4 years (N Rules, Social- Duration: ~7 weeks Subject strategies, behavior; Team 2 = 2) emotional Frequency: ~2.2/week Transitions and child: behavior was PBC Length: Observation = Rules decreased variable across Not reported; Feedback post-feedback; strategies with = ~15 min Team 2: Clear transitions being effects on all 3 most effective strategies

Artman- Head Start (N Global Quality; Format: Web-mediate Randomized Teachers with the Not Reported Meeker, =33) across an Pyramid Model Duration: 12 weeks Group highest levels of Hemmeter, & intervention and Practices Frequency: 1/ two Comparison participation (i.e., Snyder, 2014 BAU condition weeks more videos Length: Observation submitted for PBC =30-60, Debrief =NA feedback and visits to the website) maintained or increased their use of practices; Global quality as measured by the CLASS4 was inconsistent

3 Family Day Care Rating Scale (Harms & Clifford, 1989) 4 Classroom Assessment Scoring System (Pianta, La Paro, & Hamre, 2008)


Table 2-6. Continued. (Columns: Study Reference; Participants; Teaching Practices; Coaching Dose; Design; Teacher Outcomes; Child Outcomes)

Conroy et al., Head Start and BEST in Class Format: Onsite Randomized Significant increase Significant 2014 Public Pre-K (N = practices Duration: 14 weeks Controlled in use of rules, pre- increase in the 53) across a targeting the Frequency: 1/week Trial correction, behavior overall percent of PBC intervention and prevention of Length: 2 hours specific praise, child engagement BAU condition; emotional instructive feedback, and positive Children ages 3- behavioral corrective feedback, interactions; 5 years (N = 130) disorders and and opportunities to Significant increase child respond decrease in engagement overall percentage of disruptive, aggressive and defiant behavior and negative interactions Conroy, Head Start, BEST in Class Format: Onsite Pre-Post Teacher’s use of the Children’s Sutherland, Public Pre-K and practices Duration: 14 weeks strategies increased problem behaviors Vo, Carr & University based targeting the Frequency: 1/week and maintained after decreased over Ogston, 2014 Childcare (N = prevention of Length: 60 min the intervention time and their 10); Children emotional engagement PBC ages 3-5 years behavioral slightly increased (N = 19) disorders and increase child engagement Fettig & Childcare (N=6); Pyramid Model Format: Onsite/Group Pre-Post Teachers increased Not Reported Artman- Children ages 2- Practices Duration: 6 months in the mean Meeker, 2016 5 years Frequency: 2-3/month percentage of Length 60-90 min indicator PBC implemented as measured by the TPOT5 and a reduction in red flag items

5 Teaching Pyramid Observation Tool (Hemmeter, Fox, & Snyder, 2013)


Table 2-6. Continued. (Columns: Study Reference; Participants; Teaching Practices; Coaching Dose; Design; Teacher Outcomes; Child Outcomes)

Fox, Early Childhood Pyramid Model Format: Onsite Multiple Teachers increased Not Reported Hemmeter, Special Practices Duration: 60 days Probe their use of practices Snyder, Education Frequency: 1-2/week Single- above baseline levels Binder, & (N = 3) Length: 60-90 min subject and 2 teachers met Clarke, 2011 criterion levels

PBC

Greenwood, Early Childhood Literacy 3D: Oral Format: Onsite Waitlist Teachers in year 1 did Children in year 2 Abbott, Special language, Duration: 26 weeks Randomized not have significant made significant Beecher, Education alphabet Frequency: 1/week Controlled increases in their use linear growth in Atwater, & Year 1 (N= 20), knowledge, and Length: 60 min Trial of practices at the end the PELI Peterson Year 1 (N =10) phonological of the year compared Composite. The (2017) awareness to BAU. Teachers in PELI ™6 year 2 had significant vocabulary and PBC increases in their use comprehension of teacher literacy had significant focus practices and slopes favoring child literacy the children with engagement IEPs compared to their Year 1 BAU scores.

Hemmeter, Public Pre-K Pyramid Model Format: Onsite and Multiple All teachers acquired Class-wide Hardy, (N = 3); Children Practices Web-mediated Probe the practice and challenging Schnitz, ages 3-5 years Duration: Not Reported Single- maintained the behavior was Adams, & Frequency: 2-3/week subject practices above reduced in 2 of Kinder, 2015 Length: Not Reported baseline the 3 classrooms

PBC

6 Preschool Early Literacy Indicators (Abbott, Kaminski, Aguayo, & Latimer, 2014)


Table 2-6. Continued. (Columns: Study Reference; Participants; Teaching Practices; Coaching Dose; Design; Teacher Outcomes; Child Outcomes)

Hemmeter, Public Pre-K and Pyramid Model Format: Onsite Randomized Significant increase Children in the Snyder, Fox, Early Childhood Practices Duration: 12-16 weeks Controlled in the use of the intervention group & Algina Special Frequency: 1/week Trial practices as had higher scores (2016) Education Length: 10-305 min measured by the than the control (N = 40) across TPOT in comparison group on the PBC intervention and to the BAU condition, SSIS, differences BAU condition; Higher levels of were statistically Children ages 3- emotional support significant 5 years and behavior management on the CLASS for intervention teachers compared to BAU

Hemmeter, Head Start (N = Descriptive Format: Onsite Multiple All teachers Increased Snyder, 43) and praise Observation and Email Probe increased their use of engagement in all Kinder, & Childcare (N = statements to Feedback Single- descriptive praise 4 classrooms; Artman, 2011 1); Children ages comment on Duration: 1.5 to 3 weeks subject and maintained their reduction in 3-5 years children’s Frequency: 2-3/week use of descriptive challenging PBC positive behavior Length: Not reported praise above behavior for 3 of baseline levels the 4 teachers

Hsieh, Childcare (N = Oral language, Format: Onsite Multiple All teachers Paired t-tests Hemmeter, 3), Public Pre-K comprehension Duration: 5.5-7.5 weeks baseline increased their use of revealed that the McCollum & (N = 2); Children of text, Frequency: 1-2/week Single- the strategies in all children Ostrosky, ages 3-5 years phonological Length: 4-5 hours subject areas and maintained demonstrated 2009 awareness and their use of the significantly higher alphabetic strategies above scores on picture PBC principle, and baseline levels naming, print concepts alliteration, and written rhyming, and book language handling after the practices intervention


Table 2-6. Continued. (Columns: Study Reference; Participants; Teaching Practices; Coaching Dose; Design; Teacher Outcomes; Child Outcomes)

Johnson, Head Start (N = Pyramid Model Format: Onsite Pre-Post Significant increase Not Reported 2017 30), Early Practices Duration: 1 year in the mean Childhood Frequency: Monthly percentage of PBC Special Length: Not reported indicators Education implemented from collaborative (N = fall to spring as 26), State school measured by the readiness TPOT programs (N = 23), Early Childhood Special Education self-contained (N = 7), Childcare (N = 7); Children ages 3-5

McCollum, Public Pre-K (N = Vocabulary, Format: Onsite Randomized Significant post Not Reported Hemmeter & 12) Phonological Duration: 30 weeks Group intervention group Hsieh, 2011 Awareness, and Frequency: bi-weekly Comparison difference for Print Concepts Length: Feedback 15 Phonological PBC min, Observation not Awareness, and reported Print Concepts; Significant post intervention group difference for 3 areas of the ELLCO7; Most teachers reached 80% criterion in most areas

7 Early Language and Literacy Classroom Observation (Smith & Dickinson, 2002)


Table 2-6. Continued. (Columns: Study Reference; Participants; Teaching Practices; Coaching Dose; Design; Teacher Outcomes; Child Outcomes)

Neuman & Center based Global Quality; Format: Onsite Randomized Statistically significant Not Reported Cunningham, care (N = 177), Language and Duration: 32 weeks Controlled improvements in 2009 Family Childcare Literacy Frequency: 1/week Trial language and literacy (FCC) Homes (N Practices Length: 60-90 min practices for teachers PBC = 114), across 3 who received Components conditions coursework plus college course, coaching, with on-site coaching, substantial effect BAU sizes for center and FCC providers as measured by the ELLCO and CHELLO8

Neuman & Community Global Quality; Format: Onsite Randomized Statistically significant Not Reported Wright, 2010 center or public Language and Duration: 10 weeks Controlled improvements in the preschool Literacy Frequency: 1/week Trial structural environment PBC teachers (N = Practices Length: 3 hours as measured by the Components 148) across 3 ELLCO, when conditions compared to the other college course, conditions on-site coaching, BAU

Powell, Head Start (N = Language and Format: Onsite or Web- Randomized Statistically significant Statistically Diamond, 88) across on- Literacy mediated Controlled effects were identified significant effects Burchinal, & site and remote Practices Duration: 15 weeks Trial in the areas of were identified in Kohler, 2010 coaching Frequency: biweekly classroom the areas of letter conditions; Length: Onsite = 2 environment, knowledge, PBC Children ages 3- hours, Web-mediated = classroom supports blending skills, Components 5 years (N = 759) 30 min for early literacy and writing, and language concepts about development print

8 Child/Home Environmental Language and Literacy Classroom Observation (Neuman, Dwyer, & Koh, 2007)


CHAPTER 3
METHODOLOGY

Systematically describing aspects of the process dimension of coach-based professional development (PD) is critical for understanding how coaches promote teachers’ learning and use of evidence-based practices (EBP) in the classroom. The process dimension of coaching includes the coaching behaviors, strategies, and materials used to promote the teacher’s confidence, competence, and use of an identified set of evidence-based teaching practices. The present study explored how coaches facilitated conversations during the coaching debrief meeting of an on-site Practice-based Coaching (PBC) partnership. A direct behavioral observation system was developed to investigate (a) the proportion of time allocated to different conversational foci, including who initiated the conversation focus; (b) a description of the verbal behaviors used by coaches to support teachers’ active participation in planning for, implementing, and evaluating the embedded instruction practices that were the practice focus of the coaching partnership; and (c) an exploration of whether the conversation foci, coach and teacher initiations, and verbal behaviors changed over the course of the partnership (across three occasions). Descriptive methods were used to examine these aspects of the debrief conversation across seven coaches and within each coach-teacher dyad across three occasions (occasion 1 = sessions 1-5, occasion 2 = sessions 6-10, occasion 3 = sessions 11-15). Data for the present study were collected as part of year one of a larger efficacy trial examining Tools for Teachers (TfT), a professional development intervention designed to support classroom teachers to use embedded instruction with preschool children with disabilities (Snyder, Algina et al., 2015).


The purpose of this chapter is to describe the methods used to conduct the present study. The research questions are presented below, followed by a description of the context, measures, participants, procedures, and analytic techniques employed.

Research Questions

The primary aim of the present study was to describe the PBC debrief meeting processes including how the coach and teacher initiated and participated in six conversation foci across three occasions and how coaches facilitated those conversation foci using eight verbal behaviors. The following research questions guided the analyses conducted in the present study:

1. What proportion of the coaching debrief time was spent in each conversation focus across all coaching debrief meetings?

1a. Did differences exist between dyads in the proportion of coaching debrief time spent in each conversation focus?

2. Did the proportion of coaching debrief time, spent in each conversation focus, change over time?

2a. Did differences exist between dyads in the proportion of coaching debrief time spent in each conversation focus over time?

3. Of the total number of conversation initiations, what proportion were initiated by the coach versus the teacher?

3a. What proportion of each conversation focus was initiated by the coach versus the teacher?

3b. Did differences exist between dyads in the proportion of conversation foci initiations by the coach versus the teacher?

4. Did the proportion of coach versus teacher initiations change over time?

4a. Did differences exist between dyads in the proportion of conversation foci initiations over time?

5. At what rate did coaches use the verbal behaviors across all coaching debrief meetings?

5a. Did differences exist between coaches in the rate of verbal behaviors used?


6. At what rate were coach verbal behaviors used within each conversation focus?

6a. Did differences exist between coaches in the rate of verbal behaviors used within each conversation focus?

7. Did the rate of coach verbal behavior change over time?

7a. Did differences exist between coaches in the rate of verbal behaviors used over time?

8. What were the demographic characteristics of each dyad, and were there differences in the coaching implementation, proportion of time spent in each conversation focus, coach and teacher initiations, and rate of coach verbal behavior relative to the overall sample?

Context for the Present Study

Data used to conduct the present study were collected within year one of an Institute of Education Sciences sponsored Goal 3 (Efficacy and Replication) randomized controlled trial of TfT, a PD intervention designed to support classroom teachers to use embedded instruction with preschool children with disabilities (Snyder, Algina et al., 2015). Embedded instruction is an intentional and systematic instructional approach for promoting children's acquisition, maintenance, and generalization of skills that support access to and participation in the general preschool curriculum (Snyder et al., 2013). The PD intervention has four components: (a) four integrated high-quality workshops, (b) implementation guides and materials, (c) a comprehensive, web-based multimedia tool kit, and (d) two variants of coaching (Snyder, Hemmeter, Bishop et al., 2015).

In year one, 44 preschool teachers from two southeastern states located in proximity to two study performance sites in FL and TN were randomly assigned at each site to one of three conditions: (a) 16 hours of workshops, implementation guides and materials, a comprehensive web-based multimedia tool kit, plus 16 weeks of on-site PBC (n = 15); (b) 16 hours of workshops, implementation guides and materials, a comprehensive web-based multimedia tool kit, plus 15 weeks of web-mediated self-coaching (n = 14); and (c) district-provided PD (n = 15). There were eight project-based coaches in the larger study, but one coach left the study after three coaching sessions were provided to her teachers. Seven of the coaches worked with their teachers throughout the study. Each coach was assigned to work with one to three teachers at their performance site. For the present study, one teacher per coach was selected. Teacher selections were based on who received the dose of coaching most aligned with the coaching protocol used in the larger study.

Introduction of the Teaching Practices through Workshops

Supporting in situ implementation of the embedded instruction teaching practices was the focus of the on-site coaching sessions. Prior to on-site coaching, teachers participated in four, 4-hour interactive workshops distributed across a 4-week period.

Workshop presenters at each performance site included the principal investigators, site coordinators, and coaches. Workshops provided the teachers with information about the teaching practices associated with embedded instruction, which were the focus of the PBC partnership. The practices were organized into four components for implementing embedded instruction. A brief description of each of the four embedded instruction components is provided below (Snyder et al., 2013).

 “What to Teach” practices support teachers to identify target behaviors or skills, which build on children’s strengths and are aligned with the general preschool curriculum and the child’s individualized education program (IEP). These target behaviors or skills are designed to increase access to and participation in classroom activities.

 “When to Teach” practices include the identification of natural and logical times of day to provide embedded learning opportunities for the child to practice the target behaviors or skills, with sufficient intensity to make measurable growth.


 “How to Teach” practices guide the selection of systematic instructional strategies that support the child to demonstrate the behaviors or skills under supportive conditions during meaningful classroom activities.

 “How to Evaluate” practices foster the teachers’ ability to evaluate their implementation of embedded instruction and the child’s progress, to make data- informed decisions about instruction.

Workshops included a variety of different strategies for delivering content and providing concrete demonstrations of each of the embedded instruction teaching practices. Presentation and active learning strategies included lecture, large- and small- group discussion, video demonstrations, case studies, modeling, practice with feedback, and opportunities for individual planning and self-evaluation. Information about the proportion of time in each learning format is presented in Table 3-1. Within each workshop, teachers received a full-color, bound Workbook and Practice Guide with copies of the slides presented and additional information to extend and apply what was discussed in each workshop. In addition, they received a digital camera for recording their embedded instruction practices in their classrooms. The use of half-day workshops spread across a 4-week period provided time for teachers to apply new or refined skills and teaching practices in between workshops. At the end of the workshop series, teachers were also given access to the password-protected embedded instruction website. The website included electronic versions of the workshop materials, access to a video demonstration library, and additional information about each of the embedded instruction teaching practices including research briefs, “Try-it” activities, teacher planning forms, and online graphing tools for supporting the “How to Evaluate” practices. These materials were available to coaches and teachers for use throughout the on-site coaching sessions.


Workshop fidelity. Procedures for delivering the content of the workshops were operationalized using a Trainers Guide with slide scripts and activity descriptions. In addition, fidelity checklists were completed for each workshop by project staff. Workshop fidelity in year one was high across both performance sites for all four workshops. At one performance site, the mean percentage of indicators implemented across workshops was 98.3% (range = 97-100). At the other performance site, the mean percentage of indicators implemented across workshops was 98.2% (range = 95.5-100). Across sites, the mean percentage of indicators implemented across workshops was 98.3% (range = 95.5-100).

On-site Practice-based Coaching

Each teacher in the on-site PBC condition was assigned to work one-on-one in his or her classroom with a trained, project-based coach. A description of the coach training, coaching implementation supports, and procedures for implementing weekly coaching sessions is provided in later sections of this chapter. Following the workshop series, coaches were to meet with teachers weekly for 16 weeks to engage in (a) strengths and needs assessments to inform goal setting and action planning, (b) focused observation, and (c) reflection and feedback. The next two paragraphs provide a description of how the coaching partnership is intended to evolve across coaching sessions and what occurred within each of the on-site coaching sessions.

Across coaching sessions

The components of PBC are implemented across multiple phases of the coaching partnership, as described in the Embedded Instruction for Early Learning Coach Manual and associated protocols (Snyder, Hemmeter, Bishop et al., 2015). Figure 3-1 outlines each of the phases.


The initial phase of coaching included the coach and teacher interactions across the four, 4-hour workshop series and the orientation meeting. Coaches worked side-by-side with teachers they were assigned to coach during the workshop series to support them with the applied learning activities, to gather information about the teacher's knowledge of the embedded instruction practices and the application of these practices in their classrooms, to learn about the teacher's classroom, and to begin to establish a collaborative partnership. Following the workshop series, the coach and teacher held an orientation meeting designed to share information with teachers about their specific role and the role of the coach within the on-going collaborative coaching partnership. During the second phase, the coach and the teacher further develop their partnership through (a) discussions of the teacher's strengths and areas for growth, facilitated by a strengths and needs assessment; and (b) the establishment of a shared purpose for the partnership grounded in an action plan goal developed collaboratively by the coach and teacher. In the third phase, the coach and teacher participate in several cycles of coaching focused on the current goal and action plan, which include a focused observation and a debrief conversation. Coaches and teachers typically complete several action plans during this phase, with each action plan lasting approximately 3-5 weeks. In the fourth phase of coaching, the coach supports the teacher to reflect on and self-evaluate his or her progress over the 16 weeks of coaching in relation to each of the embedded instruction teaching practices, which were the focus of coaching. The teachers' self-evaluation often leads to the identification of areas of strength and areas for continued growth and refinement, as well as the articulation of personal and professional supports for sustaining implementation of embedded instruction. In the fifth and final phase of the partnership, the coach gradually withdraws support through post-intervention emails and a closing meeting at the end of the intervention school year.

Data included in the analyses for the present study represent phases 2-4 (i.e., sessions 1-15) of the coaching partnership, as described in Figure 3-1, Figure 3-2, and the next section.

Within each coaching session

Each of the 15 on-site coaching sessions (i.e., phases 2-4) is composed of a focused observation lasting approximately 1-hour, followed by a coaching debrief lasting approximately 30-60 minutes (Snyder, Hemmeter, Bishop et al., 2015). Both the observation and the debrief components of PBC are guided by an action plan that is written collaboratively during the first coaching session. The action plan includes: (a) the teachers’ goal for implementing embedded instruction, (b) a criterion statement to know when the goal has been met, (c) action steps for achieving the goal, (d) resources and materials needed, and (e) a timeline. Each action plan is designed to support the teacher to plan for, implement, and evaluate the use of embedded instruction teaching practices in the classroom.
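To make the structure of the action plan concrete, the sketch below shows one way its five components could be represented in code. It is an illustration only; the class, field names, and example content are hypothetical and are not part of the TfT materials.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionPlan:
    """Hypothetical representation of the five action plan components described above."""
    goal: str                       # the teacher's goal for implementing embedded instruction
    criterion: str                  # statement describing when the goal has been met
    action_steps: List[str]         # steps for achieving the goal
    resources: List[str] = field(default_factory=list)  # resources and materials needed
    timeline: str = ""              # timeline for completing the action steps

# Illustrative content only
plan = ActionPlan(
    goal="Provide embedded learning trials for two target skills during center time",
    criterion="At least four complete learning trials per target skill per day for one week",
    action_steps=["Review the child's IEP goals", "Complete the embedded instruction planning form"],
    resources=["Planning form", "Digital camera"],
    timeline="Weeks 1-4",
)
```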

Each coaching session begins with the coach conducting a focused observation based on the teacher's action plan goal or action steps. During the observation, the coach records video or takes notes about the teacher's implementation of teaching practices, complete embedded instruction learning trials, and how the teacher interacts with children, staff, and within the classroom environment. Coaches make the observation and data collection process transparent to the teachers they are supporting by focusing on the observable and measurable embedded instruction teaching practices the teacher has identified as a goal. Following the focused observation, the coach and teacher meet to debrief. The coaching debrief meeting is a dyadic transactional exchange between the coach and teacher, designed to provide performance-based feedback, facilitate teacher reflection and problem-solving, and support the teacher to plan for future implementation guided by the action plan and the embedded instruction teaching practices. Figure 3-2 provides an overview of the coaching protocol used to guide the coach's interactions with the teacher during the session.

For the present study, a video-based, direct observational coding system was used to describe aspects of the coaching processes, which occurred within each coaching debrief meeting, across the 15-sessions of on-site PBC. Data about the process variables were gathered at three time points (occasion 1 = sessions 1-5, occasion 2 = sessions 6-10, occasion 3 = sessions 11-15). The coding system was used to gather data about the duration of conversation foci, the number of coach and teacher initiations, and the rate of coach verbal behavior. The development of this tool is described below.

The Coaching Practices Observation Tool

The Coaching Practices Observation Tool Research Version 1.0 (CPOT-RVI; Shannon & Snyder, 2016) was the primary measure used in the present study. The measure was developed to help quantify selected aspects of the coaching process over time, specifically the focus of the debrief conversations, including who initiated the conversation foci, and the types of coach verbal behavior. The CPOT-RVI was developed by the student investigator in collaboration with a faculty member who developed the PBC framework and who was a principal investigator on the TfT efficacy trial. The development of the CPOT-RVI was an iterative process, incorporating theoretical literature and several rounds of piloting. The iterative processes used to develop and pilot the CPOT-RVI are described below.

Step 1–Reviewing the Coaching Literature

The CPOT development process began with a review of the coaching literature with a particular focus on (a) how to describe coaching processes and (b) the identification of observable and measurable practices used by coaches to facilitate teacher participation in coaching conversations. Three sources, which had previously identified aspects of the coaching process, most strongly influenced the initial development of the CPOT codes. These sources included (a) the PBC coach manuals for the Embedded Instruction for Early Learning Goal 2 development study (EIEL-G2; Snyder et al., 2009); (b) the coaching manual and protocols from the Pyramid Model Goal 3 efficacy trial (TPM-G3; Teaching Pyramid Research Project, 2013); and (c) the Coaching Behavioral Assessment System (CBAS; Smith, Smoll, & Hunt, 1977), a continuous, timed-event athletic coaching coding system for live observation.

Step 2–Operationally Defining Coaching Process

Codes to operationally define the coaching process were developed iteratively by comparing the conversational topics, behaviors, and strategies located in the coaching literature with transcripts of videotaped coaching debrief meetings. Two PBC coaching debrief videos, one from the EIEL-G2 and one from the TPM-G3 studies, were transcribed and hand-coded by the student researcher to identify examples of the conversational foci/strategies (e.g., problem-solving, reflection, sharing materials and resources) and coach verbal behaviors (e.g., mistake-contingent encouragement, general encouragement, open-ended questions) included in the CPOT Pilot Version 1 (CPOT-PVI). Excerpts from the transcripts were used as exemplars of the behaviors specified in the codebook (Figure 3-3).

The transcript coding procedure also led to the development of two types of codes—duration and event. During the transcript coding process, it was evident that some verbal behaviors (e.g., supportive feedback) occurred most often within conversational foci (e.g., reflective conversation). For example, while the teacher reflected on embedded instruction learning trials delivered during the small group activity, the coach provided supportive feedback emphasizing behaviors or actions the teacher did well and how the children responded to the instruction. Codes from this version of the system are shown in Table 3-2.

Step 3–Applying the Codes to Video

The CPOT-PVI codes shown in Table 3-2 were applied to three additional videos of coaching debrief meetings collected in the TPM-G3 study. Paper and pencil data collection sheets were used to record the duration and frequency of observed codes. In addition, a spreadsheet was used to organize additional quotes that exemplified each code, to confirm the codebook captured the observed coaching process. These data informed revisions to the codes and exemplars.

Some codes were collapsed to simplify the system. For example, general encouragement (e.g., Yes, that's correct!) and supportive feedback (e.g., I like the way you used the visual cue during today's circle time, to redirect Timothy and to remind him to raise his hand to speak.) were collapsed into a single code, because both verbal behaviors were designed to give the teacher supportive feedback and encouragement to persist in his or her use of a particular teaching practice. Other codes were expanded or added to address verbal behaviors and conversational foci not included in the CPOT-PVI codes. For example, organization (e.g., I'll see you Tuesday at 9:00am. Can you send me the data ahead of time?) was modified to encompass other verbal behavior related to roles or actions the coach or teacher would have within the coaching session. The expanded code was named coaching process code (e.g., For our next meeting I'm going to film you interacting with [child], but you'll continue providing instruction as usual. Okay?). This expansion allowed coaching debrief conversations focused on the roles and responsibilities of the coach/teacher including, but not limited to, scheduling and other logistics to be captured as part of the coaching process.

Prior to collapsing or adding new codes, a developer of the PBC framework worked with the student investigator to develop consensus about which codes were most salient by (a) reviewing the participant quotes, and (b) applying the codes to video clips of coach-teacher interactions during the coaching debrief. These revisions occurred over a period of 3 months during bi-weekly meetings in Summer 2015. At this time, it was also determined that the materials used by coaches to facilitate verbal interactions should be coded. The codes shown in Table 3-3, the product of these revisions, were incorporated into the Coaching Practices Observation Tool-Pilot Version 2 (CPOT-PV2). The system was moved from paper-pencil coding to the Observer XT 12® behavioral coding software (Noldus Information Technology, 2015).

Step 4–Conducting a Pilot Study of the CPOT-PV2

A pilot study of the CPOT-PV2 was conducted by the student researcher using the Observer XT 12® software (Noldus Information Technology, 2015) in Fall 2015. The pilot sample included six videotaped coaching debrief meetings from the TPM-G3 efficacy trial. The sample represented three coach-teacher dyads at two time points. Data were extracted from the coding software and descriptive analyses were used to examine: (a) the duration of each conversational focus during the coaching debrief meeting; (b) which coach verbal behaviors and materials were used most often during the coaching debrief meeting; and (c) whether differences existed across coaches in their conversational focus, verbal behavior, or use of materials during the coaching debrief meeting. The results of this pilot study were used to inform additional CPOT code revisions, the development of the CPOT Research Version I Manual (CPOT-RVI; Shannon & Snyder, 2016), and the present study.

Based on pilot study findings, the system was refined. Duration codes were refined in three ways. First, the Information Seeking code was removed from the conversation foci duration codes given its overlap with the Goal Setting and Action Planning and Reflection and Feedback codes. The Information Seeking duration code was transformed to an event code, Instructional Statements, a verbal behavior related to the coach's intent to teach or inform, which was not captured by the existing verbal behavior codes. Second, the Other code was removed from the conversation foci codes and replaced with Uncodeable. All video footage where the coach and teacher were present and engaged in discussion fit into one of the conversation foci. Uncodeable was used when the coach or teacher was not visible in the frame or was engaged in an activity that prevented participation in the coaching discussion; for example, when the teacher paused the session to address a question from a parent or teaching assistant. Third, a forced binary response code following the onset of a new conversation focus was added to capture who initiated each conversation focus (i.e., coach or teacher).


Event codes were also refined in three ways. First, Directives were removed from the coach verbal behavior codes because the rate of occurrence was very low and was judged not to meaningfully contribute to understanding the types of verbal behaviors used as part of the PBC framework. Second, the operational definition for Supportive Verbal Feedback was revised and a new code, General Praise and Agreement, was created to distinguish less descriptive supportive statements from Supportive Verbal Feedback. Third, Questioning was split into two separate codes: Clarifying Questions and Probing Questions.

Step 5–Piloting the Revised CPOT-PV2 System Using Observational Coding Software

Refinement of the CPOT-PV2 was conducted by the student researcher using the Observer XT 12® software (Noldus Information Technology, 2015) in the Summer and Fall of 2016 to explore interobserver agreement. These refinements included coding of 10, 3- to 7-minute video clips by the student investigator, a research scientist, and two graduate students familiar with the PBC framework and embedded instruction. Based on the coding of these clips, examination of interobserver agreement, and subsequent discussions, the system was refined in two ways. First, the codes for materials were removed due to video quality; materials were not consistently visible and inference based on verbal behavior was not reliable. Second, the conversation focus code Review Statement was renamed Summarizing to support coders' understanding of the operational definition for the code. A brief transcript exemplifying how the codes were applied to coach-teacher interactions is shown in Figure 3-4. These revisions led to the CPOT-RVI used in the present study, which is described in the next section.


Measures

The present study included three types of measures. The CPOT-RVI observational coding system was the primary study measure. It was used to quantify the conversation focus, coach and teacher initiations, and coach verbal behavior across three occasions. In addition, two demographic questionnaires completed by the teachers and coaches as part of the larger study, and information gathered about each coach-teacher dyad's weekly sessions during the TfT efficacy trial, including action plans and implementation fidelity of the coaching protocol, were used to support the interpretation of data obtained in the present study. Each measure is described below.

Coaching Practices Observation Tool

The CPOT-RVI is a continuous, timed-event, observational coding system designed to quantify the duration of conversation focus, frequency of coach and teacher initiations, and the frequency of coach verbal behavior.

The CPOT-RVI duration codes provide information about the length and focus of the conversation. There are seven mutually exclusive and exhaustive duration codes. Five duration codes represent conversational exchanges between the coach and the teacher, one code (Summarizing) represents summary statements about what occurred in the observation that are made by the coach only, and "Uncodeable" indicates the video cannot be coded (e.g., coach or teacher participant leaves the frame). Initiation is a binary code, which is used following the onset of a new conversational focus duration code to signify whether the coach or teacher initiated the change in the conversation focus. The CPOT-RVI event codes provide information about the coach's verbal behaviors. There are eight event codes, which are counted at the onset of the first occurrence of a unique verbal behavior within a stream of verbal behavior. The codes and definitions are shown in Table 3-4.

The CPOT-RVI is scored using Observer XT 12® behavioral coding and analytic software (Noldus Information Technology, 2015). The data produced through the application of the CPOT-RVI codes can be quantified to determine (a) the amount of time spent during the debriefing meeting in each conversation focus, (b) how often each participant initiated the conversation focus, and (c) how often coaches used the verbal behaviors during the debriefing meeting. Duration data can be integrated with verbal behavior data to determine the rate of verbal behavior and if particular verbal behaviors were used during different conversation foci. The analytic techniques used in the present study are described later in this chapter. Additional information about this measure can be found in the CPOT-RVI manual (Shannon & Snyder, 2016).
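As an illustration of how the duration and event data can be combined, the sketch below computes the percentage of debrief time in each conversation focus and the rate of each verbal behavior per 5 minutes. This is not the project's analysis code; the function name and input formats are assumptions.

```python
from collections import defaultdict

def summarize_debrief(duration_bouts, verbal_behaviors):
    """duration_bouts: list of (focus, seconds) tuples from the duration codes;
    verbal_behaviors: list of verbal behavior labels from the event codes."""
    time_by_focus = defaultdict(float)
    for focus, seconds in duration_bouts:
        if focus != "Uncodeable":            # uncodeable footage is excluded from totals
            time_by_focus[focus] += seconds
    total_seconds = sum(time_by_focus.values())

    percent_by_focus = {f: 100 * s / total_seconds for f, s in time_by_focus.items()}

    counts = defaultdict(int)
    for label in verbal_behaviors:
        counts[label] += 1
    rate_per_5_min = {c: n / (total_seconds / 60) * 5 for c, n in counts.items()}
    return percent_by_focus, rate_per_5_min
```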

Demographic Questionnaires

Description of the classroom and teacher profile

Teachers enrolled in the larger study completed a project-designed classroom and teacher profile. This questionnaire was completed in a paper/pencil format. Part I of the form addressed teacher demographics including gender, race/ethnicity, age, education level, teacher state certifications and endorsements, and coursework or specialized training in early childhood or early childhood special education. Part II of the form gathered information about the teacher's professional experience in early childhood and early childhood special education, as well as other job-related responsibilities (e.g., mentor, student teacher supervisor). Part III of the form was designed to obtain information about the teacher's classroom, including the curriculum being implemented, total number of instructional minutes in the day, teacher-child ratios, and how and which therapy services are provided to children enrolled in the teacher's classroom.

Description of the project staff demographic form

All project staff completed a demographic questionnaire online using REDCap© (Harris et al., 2009), which gathered three types of information. Part I of the questionnaire addressed staff demographics including gender, race/ethnicity, age, education level, teacher state certifications and endorsements, and coursework or specialized training in early childhood or early childhood special education. Part II of the questionnaire gathered information about the staff member's involvement with the project, including coaching, data collection, or data processing tasks. Part III of the questionnaire used branching logic to obtain information about the staff member's work experience related to their project role. Coaches responded to questions regarding their familiarity with embedded instruction and PBC, and prior experiences being coached or providing coaching/consultation to others.

PBC Weekly Coaching Session Data

Data about the weekly PBC sessions were referenced to inform the interpretation of data collected via the CPOT-RVI. These data were (a) coaching dose, (b) coaches' average fidelity of implementation to the coaching protocol, (c) the number of action plans completed during the 15 weeks of coaching, and (d) the number of embedded instruction teaching practices each coach reported addressing with the teacher(s) she was assigned to work with for the study.

Coaching dose

Coaching dose is a description of the total number of coaching sessions that occurred within each coach-teacher dyad, including the duration of each session and the length of time between sessions. The coaching dose data informed the selection of dyads for the present study and the interpretation of the CPOT-RVI data by documenting the frequency and intensity of coach-teacher interactions in the larger study.

Coaching fidelity

Videotaped coaching debriefs were filmed as a component of the TfT efficacy trial. Three types of fidelity data were collected by an independent observer for each debrief meeting: (1) coach fidelity of implementation of the coaching protocol; (2) log fidelity, a point-by-point comparison of the coder's protocol and the coach's protocol to determine agreement between the rater's data and the coach's self-report data; and (3) intercoder agreement among raters for 33% of all sessions coded for coach fidelity of implementation of the coaching protocol.

Videos were viewed and scored using a coaching protocol procedural fidelity checklist by an independent observer who was reliable with the project coordinator at or above 80% on two practice videos prior to coding fidelity for the study. The majority of protocol indicators are required to be marked "yes" or "no" for completion, but some indicators can be scored "not applicable." If a protocol indicator is "not applicable," the indicator is not included in the total for the fidelity calculations. Four blocks of sessions were created that corresponded to the coaching protocol relevant for the present study (i.e., session 1, sessions 2-7, sessions 8-14, session 15). Then 40% of session 1, 33% of sessions 2-7, 33% of sessions 8-14, and 38% of session 15 were randomly selected and coded by a trained observer for coaching protocol implementation fidelity and log fidelity. In addition, intercoder agreement for the procedural fidelity coding was completed for 33% of session 1, 32% of sessions 2-14, and 40% of session 15. These data are shown in Table 3-5. The coaching fidelity data informed the interpretation of the CPOT-RVI data by demonstrating the extent to which the coaches implemented PBC as intended.
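A minimal sketch of the fidelity percentage described above, assuming the checklist is available as a mapping of indicator names to "yes," "no," or "not applicable" responses (the function and input names are illustrative):

```python
def protocol_fidelity(indicators):
    """Percentage of applicable protocol indicators marked 'yes';
    'not applicable' indicators are excluded from the denominator."""
    applicable = [v for v in indicators.values() if v != "not applicable"]
    if not applicable:
        return None
    return 100 * sum(v == "yes" for v in applicable) / len(applicable)

# Illustrative checklist for one debrief meeting:
# protocol_fidelity({"reviews action plan": "yes", "provides supportive feedback": "yes",
#                    "updates action plan": "not applicable"})  -> 100.0
```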

Action plans

Coaches submitted an action plan form each time the coach and teacher developed a new goal. Each action plan contained the five components previously described. Action plan development was required in coaching session 1 and occurred as needed in sessions 2-15. The action plan data reviewed for the present study included the number of action plans developed and the content focus of each goal. The content of the goals and action steps was reviewed prior to coding the coaching debrief meeting to help inform the CPOT-RVI coders' decisions about when the action plan was being discussed.

Teaching practices matrix

Coaches used a table to record the embedded instruction teaching practices addressed with each action plan. These practices were the focus of the workshops, teacher self-assessment, and coach performance-based feedback. Coaches could address the practices in any order and level of intensity based on the teachers’ strengths, needs, preferences, or motivations. The teaching practices matrix informed the interpretation of the CPOT-RVI data by describing which and how many teaching practices were addressed across all coaching sessions.

Participants

Participants in the present study included seven coach-teacher dyads. Coaches were affiliated with the two university-based performance sites. Teachers were certified public school teachers in four districts in two southeastern states. The student investigator in the present study served as the lead coach on the TfT study. In this section, descriptions of the coach training and teacher recruitment procedures are provided. The characteristics of each coach-teacher dyad are presented in Chapter 4.

Coach Training and Support Procedures

Training

Project-based coaches participated in a minimum of 16 hours of training across 2 days. The training was facilitated by two of the principal investigators on the project and the lead coach, who was the student investigator in the present study. Four hours were dedicated to introducing the embedded instruction teaching practices. Twelve hours were focused on the theoretical and empirical support for PBC, including (a) how to implement each component of the PBC framework; (b) how to use the coach manual and protocols; (c) supplemental materials to inform and support coaching; and (d) coaching practices designed to facilitate teacher participation in planning for, implementing, and evaluating use of the teaching practices. The coach training also employed several active learning strategies, including (a) videos of preschool teachers implementing embedded instruction practices and coach-teacher dyads engaged in coaching debrief meetings, (b) opportunities to interact with the manual and protocols, (c) role-play with performance-based feedback, (d) small and large group discussion, and (e) activities designed to give coaches experience with the teacher embedded instruction planning forms.

On-going implementation support for coaches

Throughout the duration of on-site coaching, coaches received regular group support and intermittent individual support focused on how to implement the coaching protocol and on fidelity to the protocol indicators. Coaches participated in two meetings each week, lasting approximately 60 minutes each, with their university site-based coaching team and the cross-site coaching team. During these meetings, coaches shared celebrations and challenges from their field experiences, collaboratively problem-solved how to support teachers and children enrolled in the study, and discussed strategies and resources for coaching and for supporting teachers to implement each of the embedded instruction teaching practices.

Fidelity feedback

Coaches received individual fidelity of implementation feedback by phone or email on their delivery of the protocol and their ability to accurately record the use of coaching strategies on three or more occasions throughout the year. Feedback included specific indicators to review and recommendations for aligning coaching practices with the PBC protocol and coaching strategies.

Teacher Recruitment

Teachers were recruited from four public school districts in two southeastern states to participate in year one of the larger efficacy trial of the TfT PD intervention. They were randomly assigned at each site to one of three experimental conditions. A subset of the teachers recruited participated in the on-site coaching condition—the focus of the present study. Recruitment occurred in collaboration with preschool administrators in each district. Project staff met with teachers during district-wide preschool inservice days and scheduled individual meetings to provide a description of the project and to gather written informed consent from teachers who were interested in participating.

To participate, teachers were required to (a) be working in a preschool classroom serving children ages 3-5 years, including at least two children with identified disabilities who received services under an IEP; (b) be certified to teach in the state in elementary, early childhood, or special education settings; and (c) demonstrate the classroom environment was of minimal or higher quality as measured by the Early Childhood Environment Rating Scale, 3rd ed. (Harms, Clifford, & Cryer, 2015).

Procedures

This section describes the data collection and analyses procedures used to conduct the present study. Procedures will be reported in the following sequence: (a) video coder training procedures, (b) selecting a subset of coaching debrief videos from the larger study, (c) coding study videos using the CPOT-RVI codes in Observer XT 12® behavioral coding and analytic software (Noldus Information Technology, 2015), (d) conducting reliability observations and calculating interrater agreement, (e) extracting data from the data collection software, and (f) conducting data analysis.

Video Coder Training Procedures

Preference when selecting additional video coders beyond the student investigator was given to those who had experience with (a) the PBC framework, (b) working with young children and teachers in the classroom setting, (c) coaching in a school- or center-based setting, and (d) observational coding systems. The secondary coder participated in a 3-day training with the student investigator. The training included (a) an introduction to the embedded instruction teaching practices, PBC framework, and the study in which the data were collected; (b) a detailed review of the CPOT-RVI coding manual using video exemplars of each code; and (c) an opportunity to practice coding short video clips simultaneously with the student investigator using the coding manual and paper record forms. Prior to coding for reliability, the student investigator provided the coder with a computer-based orientation to the Observer XT® coding software (Noldus Information Technology, 2015).


Following the training, the secondary coder met the reliability criterion with practice videos prior to coding study footage. The reliability criterion was overall interobserver agreement at or above 80% with the student investigator on three videos, with one or more videos at or above 80% for each code. The secondary coder reached the criterion level of performance in five sessions. Across the five training sessions, mean interobserver percent agreement was 85.1% (range 66.1% to 93.4%). In addition to the training described above, a booster session was provided following the fourth study video coded by the secondary coder when overall agreement fell below 80% with the student investigator for two consecutive videos (i.e., 66.0% and 79.2%). The booster session included a 3-hour review of the manual, training materials, and coding errors.

Percent agreement by code and overall percent agreement and kappa data for videos included in the present study sample are reported as before the booster, after the booster, and overall in Chapter 4 (Tables 4-4, 4-5, and 4-6).

Selection of Coaching Debrief Videos

Coaches were to record the coaching debrief meeting of the on-site coaching session each week within the TfT intervention. Coaches were instructed to record the entire session, regardless of duration. However, according to the coach manual, a complete coaching debrief meeting for sessions 1 and 15 should be a minimum of 30 minutes and generally should not exceed 90 minutes, while sessions 2-14 should be a minimum of 20 minutes and generally should not exceed 60 minutes (Snyder, Hemmeter, Bishop et al., 2015). Debrief meetings were 48 minutes on average, ranging from 13 to 118 minutes. Debriefs shorter than 20 minutes were excluded from the randomly selected sessions for the present study (n = 2).


The video database from which videos for the present study were selected had 210 complete debrief recordings across 15 coach-teacher dyads. For coaches who worked with more than one teacher, a single teacher was selected for the present study based on the availability of complete video recordings and consistency with the planned dose of coaching in the larger study. Inconsistency in the delivery of weekly coaching sessions occurred due to (a) one coach and two teachers leaving the project mid-year, and (b) two teachers taking personal or medical leave from the study for a period of 1 month. As previously described, the coaching partnership occurs in phases. The orientation meeting (part of phase 1) and the post-intervention and closing meeting (part of phase 5) were not included in the present study. Although important to the overall implementation of PBC, these interactions do not include a focused observation, and they present limited opportunities to implement each of the conversation foci and coach verbal behaviors (see Figure 3-2 for additional information).

Videos were randomly selected for seven coach-teacher dyads on three occasions across sessions 1 to 15 (i.e., sessions 1-5, 6-10, 11-15). Prior to random selection, the videos were blocked into three time periods to ensure the observations were distributed and representative of the coach-teacher interactions across the duration of the partnership during phases 2 to 4 (see Figure 3-5, Table 3-6).
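The blocked random selection could be sketched as follows. The session blocks come from the occasions defined above, while the function name and the representation of available recordings are assumptions.

```python
import random

# Occasion blocks used to stratify the random selection (sessions 1-5, 6-10, 11-15).
OCCASION_BLOCKS = {1: range(1, 6), 2: range(6, 11), 3: range(11, 16)}

def select_sessions(available_sessions, rng=random):
    """available_sessions: set of session numbers with a complete debrief recording
    of at least 20 minutes for one coach-teacher dyad. One session is drawn per occasion."""
    selected = {}
    for occasion, block in OCCASION_BLOCKS.items():
        candidates = [s for s in block if s in available_sessions]
        selected[occasion] = rng.choice(candidates) if candidates else None
    return selected

# e.g., select_sessions({1, 2, 3, 6, 7, 9, 11, 14, 15}) might return {1: 2, 2: 9, 3: 14}
```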

Coding Procedures

This section provides a brief summary of the coding procedures. Readers should refer to the CPOT-RVI Manual (Shannon & Snyder, 2016) for detailed information about the coding procedures and decision rules. The Observer XT 12® behavior and analytic software program (Noldus Information Technology, 2015) was used with the CPOT-RVI codes. Videos were imported into the software and coded in two viewings, first for verbal behavior event codes and second for conversation focus duration codes. Prior to viewing the video, the coder reviewed the action plan associated with the selected session and recorded the occasion, coach, teacher, and coder identifiers.

Counting coach verbal behavior

In the first viewing, the coder observed and coded the coach’s verbal behavior when the coach and teacher were visible in the frame. The observer recorded one of the eight verbal behavior codes at the beginning of the first complete behavior representing a new code. A complete behavior was typically a sentence, but could also be an incomplete phrase, which conveyed meaning (e.g., “sounds good”). Verbal behaviors were defined as a minimum of two words. Coach verbal behaviors were counted when (a) the coach exhibited a different verbal behavior following an initial behavior, (b) a coach verbal behavior followed teacher talk of 2 or more words, or (c) when a coach verbal behavior occurred after code-able footage resumed following a period of uncodeable footage. Consecutive coach talk of the same type as defined by verbal behavior codes (e.g., neutral statement, neutral statement, neutral statement) was counted as a single verbal behavior. Verbal behavior was not coded when the video was set to uncodeable, as described in the next section.
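The counting rules above can be expressed roughly as in the sketch below. It is a simplified illustration; the input representation (an ordered stream of coach behavior labels, with None marking intervening teacher talk of 2 or more words or uncodeable footage) is an assumption.

```python
def count_verbal_behaviors(coach_stream):
    """coach_stream: ordered verbal-behavior labels, one per complete coach behavior,
    with None marking teacher talk or uncodeable footage. Consecutive coach behaviors
    of the same type are counted once until interrupted."""
    counts = {}
    previous = None
    for label in coach_stream:
        if label is None:            # teacher talk or uncodeable footage resets the stream
            previous = None
            continue
        if label != previous:        # a repeated type is only counted after an interruption
            counts[label] = counts.get(label, 0) + 1
        previous = label
    return counts

# count_verbal_behaviors(["Neutral Statements", "Neutral Statements", None, "Neutral Statements"])
# -> {"Neutral Statements": 2}
```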

Duration of conversation focus and initiation

To initiate the second viewing, the coder returned to the beginning of the recording. During this pass, the coder applied one of the six duration of conversation focus codes or the uncodeable code. Uncodeable was used when the coach or teacher engaged in a behavior that prevented their participation in the coaching conversation (e.g., on the phone, talking to another person in the room) or one of the participants (i.e., coach or teacher) left the frame. Uncodeable was the default code when the video began and was used until the onset of another duration code occurred. Duration codes are mutually exclusive and exhaustive; that is, the onset of a new duration code automatically stops the previous code. The onset of a duration of conversation focus code can occur in two ways:

1. The coach-teacher dyad engaged in three turns (i.e., coach-teacher-coach or teacher-coach-teacher) related to a conversation focus code. These include Personal/Pleasantries, Coaching Process, Goal Setting and Action Planning, Problem Solving, and Reflection and Feedback.

2. The coach engaged in three or more consecutive statements summarizing something that occurred during the classroom observation. Only the coach could initiate the Summarizing code.

When a duration code was selected, a required binary response code was used to document who initiated the conversation focus. See Figure 3-3 for a duration of conversation focus exemplar from the CPOT-RVI manual. Each of the selected videos in the present study was coded for the entire duration of the recording.
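A simplified sketch of the onset rule for the dyadic conversation foci is shown below; it omits the coach-only Summarizing rule, and the speaker and focus labels are assumed inputs.

```python
def detect_onset(turns, focus):
    """turns: chronological (speaker, focus) pairs, with speaker 'coach' or 'teacher'.
    A new duration code begins once three alternating turns address the same focus;
    the speaker of the first of those turns is recorded as the initiator."""
    for i in range(len(turns) - 2):
        window = turns[i:i + 3]
        speakers = [s for s, _ in window]
        same_focus = all(f == focus for _, f in window)
        alternating = speakers[0] == speakers[2] and speakers[0] != speakers[1]
        if same_focus and alternating:
            return {"onset_index": i, "initiator": speakers[0]}
    return None

# detect_onset([("coach", "Reflection and Feedback"), ("teacher", "Reflection and Feedback"),
#               ("coach", "Reflection and Feedback")], "Reflection and Feedback")
# -> {"onset_index": 0, "initiator": "coach"}
```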

Interobserver Agreement

The student investigator for the present study viewed and coded 21 coaching debrief meeting videos, one per coach from each of the three sampling occasions described previously, using the CPOT-RVI. From the 21 debrief meeting videos, 33% (n = 7) of the videos were randomly selected for interobserver agreement. In addition, videos where the student investigator served as the coach (n = 3) were purposively sampled for interobserver agreement. The second trained observer coded 48% (n = 10) of the debrief videos to gather interobserver agreement data. The purpose of interobserver agreement was to determine the extent to which two observers saw the same occurrence or duration of the same behavior and applied the same code. Interobserver agreement is essential to behavioral observation research because it transforms data from an individual's interpretation to something closer to what occurred during the observation as defined by the codes (Yoder & Symons, 2010). The continuous, timed-event sampling method adopted by the CPOT-RVI had the advantage of time-marked occurrences and the ability to determine if observers agree on both when they observed a behavior and what behavior they observed.

The CPOT-RVI required point-by-point agreement on both the timing of each event (the time a verbal behavior occurred or a duration conversation focus code began, within a 5-second window) and the code or type of behavior observed. Agreement was calculated as the ratio of agreements to the sum of agreements and disagreements, multiplied by 100. In addition to percent agreement, the kappa statistic was calculated to estimate the extent of agreement beyond chance. Kappa was calculated as the difference between observed agreement and chance-expected agreement, divided by the maximum possible agreement beyond chance (Cohen, 1960).
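For illustration, percent agreement and kappa could be computed as in the sketch below, assuming the two observers' records have already been aligned into time-matched pairs within the 5-second window (the alignment step itself is not shown, and the function names are illustrative).

```python
from collections import Counter

def percent_agreement(n_agreements, n_disagreements):
    """Point-by-point agreement: same code applied within the 5-second window."""
    return 100 * n_agreements / (n_agreements + n_disagreements)

def cohens_kappa(aligned_pairs):
    """aligned_pairs: list of (code_observer1, code_observer2) for time-matched events.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the
    agreement expected by chance from each observer's marginal code frequencies."""
    n = len(aligned_pairs)
    p_o = sum(a == b for a, b in aligned_pairs) / n
    counts_1 = Counter(a for a, _ in aligned_pairs)
    counts_2 = Counter(b for _, b in aligned_pairs)
    p_e = sum(counts_1[c] * counts_2[c] for c in set(counts_1) | set(counts_2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```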

Data Extraction and Analyses

Data were exported from Noldus© (Noldus Information Technology, 2015) and analyzed in SPSS© (IBM Corporation, 2016) and Microsoft Excel© (2017). A description of each analysis performed to address the present study's research questions is provided below.

Conversation focus

1. The proportion of the coaching debrief time spent in each conversation focus across all 21 coaching debrief meetings was calculated by (a) summing the total time spent in each conversation focus code across all 21-observed coaching debrief meetings, (b) summing the total coaching debrief time, excluding uncodeable, (c) dividing each conversation focus by the total time, and (d) multiplying each value times 100.

1a. Differences in the proportion of time spent in each conversation focus were calculated for each coach-teacher dyad by (a) summing the total time in each conversation focus across the three-observed coaching debrief meetings for each coach, (b) summing the total coaching debrief time for each dyad, excluding uncodeable, (c) dividing each conversation focus by the total time, and (d) multiplying each value times 100.

2. To explore changes in the proportion of debrief time spent in each conversation focus across the coaching debrief meetings, the procedures described in #1 were conducted for each occasion. Each of these analyses included seven coaching debrief meetings, each representing a unique coach-teacher dyad at occasion 1 (i.e., sessions 1-5), occasion 2 (i.e., sessions 6-10), and occasion 3 (i.e., sessions 11-15).

2a. Change in the proportion of debrief time spent in each conversation focus across the coaching partnership debrief meetings for each of the seven coach- teacher dyads was calculated using the procedures described in question #1a and was done three times. Each analysis included three debrief meetings, each representing a unique coach-teacher dyad at occasion 1 (i.e., sessions 1-5), occasion 2 (i.e., sessions 6-10), and occasion 3 (i.e., sessions 11-15).
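The steps in items 1 through 2a amount to a grouped percentage calculation. A rough sketch is shown below, where each coded bout is assumed to carry its focus, its duration in seconds, and the dyad and occasion identifiers; the function name and field names are assumptions.

```python
from collections import defaultdict

def focus_time_percentages(bouts, group_keys=()):
    """bouts: iterable of dicts with 'focus', 'seconds', and optional grouping fields
    such as 'dyad' or 'occasion'; uncodeable time is assumed to be excluded already.
    group_keys=() answers item 1; ('dyad',) item 1a; ('occasion',) item 2."""
    time_in_focus, total_time = defaultdict(float), defaultdict(float)
    for bout in bouts:
        group = tuple(bout[key] for key in group_keys)
        time_in_focus[(group, bout["focus"])] += bout["seconds"]
        total_time[group] += bout["seconds"]
    return {key: 100 * seconds / total_time[key[0]]
            for key, seconds in time_in_focus.items()}
```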

Initiations

3. The proportion of coach versus teacher initiations was calculated by (a) summing the coach initiations and the teacher initiations across all 21-observed coaching debrief meetings, (b) dividing the sum of coaching initiations and teacher initiations by the total number of initiations across all 21-observed coaching debrief meetings, and (c) multiplying each value times 100.

3a. The proportion of coach versus teacher initiations associated with each conversation focus was calculated by (a) summing the coach initiations and the teacher initiations associated with each conversation focus across all 21- observed coaching debrief meetings, (b) dividing the sum of coach initiations and teacher initiations by the total number of initiations associated with each conversation focus across all 21-observed coaching debrief meetings, and (c) multiplying each value times 100.

3b. The proportion of coach versus teacher initiations associated with each conversation focus for each coach-teacher dyad was calculated by (a) summing the coach initiations and the teacher initiations associated with each conversation focus, (b) dividing the sum of coach initiations and teacher initiations by the total number of initiations across all three-observed coaching debrief meetings, and (c) multiplying each value times 100.

4. The proportion of coach versus teacher initiations over time was calculated by (a) summing the coach initiations and the teacher initiations across all 21-observed coaching debrief meetings at occasion 1 (i.e., sessions 1-5), occasion 2 (i.e., sessions 6-10), and occasion 3 (sessions 11-15); (b) dividing the sum of coach initiations and teacher initiations by the total number of initiations across all 21-observed coaching debrief meetings at occasions 1, 2, and 3; and (c) multiplying each value times 100.


4a. The proportion of coach versus teacher initiations associated with each conversation focus over time was calculated by (a) summing the coach initiations and the teacher initiations associated with each conversation focus across all 21-observed coaching debrief meetings at occasion 1 (i.e., sessions 1-5), occasion 2 (i.e., sessions 6-10), and occasion 3 (sessions 11-15); (b) dividing the sum of coach initiations and teacher initiations by the total number of initiations associated with each conversation focus across all 21-observed coaching debriefs at occasions 1, 2, and 3; and (c) multiplying each value times 100.

4b. The proportion of coach versus teacher initiations for each dyad was calculated over time by (a) summing the coach initiations and the teacher initiations associated with each conversation focus at occasion 1 (i.e., sessions 1-5), occasion 2 (i.e., sessions 6-10), and occasion 3 (sessions 11-15); (b) dividing the sum of coach initiations and teacher initiations by the total number of initiations at each time; and (c) multiplying each value times 100.
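The initiation proportions in items 3 through 4b follow the same pattern as the sketch above, with counts in place of durations; a brief sketch under the same assumptions:

```python
from collections import Counter

def initiation_percentages(initiations, group_keys=()):
    """initiations: iterable of dicts with 'initiator' ('coach' or 'teacher') plus optional
    grouping fields such as 'focus', 'dyad', or 'occasion'."""
    counts, totals = Counter(), Counter()
    for record in initiations:
        group = tuple(record[key] for key in group_keys)
        counts[(group, record["initiator"])] += 1
        totals[group] += 1
    return {key: 100 * n / totals[key[0]] for key, n in counts.items()}
```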

Overall verbal behavior

5. The rate of each verbal behavior used by the coaches across all 21-observed coaching debrief meetings was calculated by (a) summing the total number of each verbal behavior that occurred across all 21 observed debrief meetings, (b) summing the total duration of all debrief meetings in minutes, excluding uncodeable, (c) dividing the total number of times each verbal behavior occurred by the total number of minutes, and (d) multiplying the value times 5 to represent the rate of each verbal behavior per 5 minutes.

5a. The rate of each coach’s verbal behavior across all three-observed coaching debrief meetings was calculated by (a) summing the total number of each verbal behavior that occurred across all three debrief meetings for each coach, (b) summing the total duration of debrief meetings in minutes, excluding uncodeable, (c) dividing the total number of times each verbal behavior occurred by the total number of minutes, and (d) multiplying the value times 5 to represent the rate of each verbal behavior per 5 minutes.

6. To explore whether the rate of coach verbal behavior changed across the coaching partnership, the procedures described in #5 were repeated three times. Each analysis included seven-observed coaching debrief meetings each representing a unique coach-teacher dyad at occasion 1 (i.e., sessions 1-5), occasion 2 (i.e., sessions 6-10), and occasion 3 (i.e., sessions 11-15).

6a. To explore whether the rate of coach verbal behavior changed across the coaching partnership for each coach-teacher dyad the procedures described in 5a were repeated three times. Each analysis included three-observed coaching debrief meetings each representing a coach-teacher dyad at occasion 1 (i.e., sessions 1-5), occasion 2 (i.e., sessions 6-10), and occasion 3 (i.e., sessions 11-15).


Verbal behavior by conversation focus

7. The rate of coach verbal behavior used within each conversation focus across all 21-observed coaching debrief meetings was calculated by (a) summing the total number of each verbal behavior that occurred within each conversation focus across all 21-observed debrief meetings, (b) summing the total duration of each conversation focus across all debrief meetings in minutes, excluding uncodeable, (c) dividing the total number of times each verbal behavior occurred by the total number of minutes, and (d) multiplying the value times 30 to represent the rate of each verbal behavior per 30 minutes.

7a. The rate of each coach’s verbal behavior used within each conversation focus across all three-observed coaching debrief meetings was calculated by (a) summing the total number of each verbal behavior that occurred within each conversation focus across all three-observed debrief meetings, (b) summing the total duration of each conversation focus across three debrief meetings in minutes, excluding uncodeable, (c) dividing the total number of times each verbal behavior occurred by the total number of minutes, and (d) multiplying the value times 30 to represent the rate of each verbal behavior per 30 minutes.
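The rate calculations in items 5 through 7a reduce to events divided by codeable minutes, scaled to the reporting interval (5 minutes overall, 30 minutes within a conversation focus). A minimal sketch, with illustrative names:

```python
def rate_per_interval(event_count, codeable_minutes, interval_minutes):
    """Rate of a verbal behavior per interval_minutes of debrief (or conversation focus) time."""
    return event_count / codeable_minutes * interval_minutes

# e.g., 42 probing questions across 330 codeable minutes, expressed per 5 minutes:
# rate_per_interval(42, 330, 5)  -> approximately 0.64
```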

Coach differences

8a. Differences in coach-teacher dyads' adherence to the debrief implementation protocol were calculated by summing the number of protocol indicators checked as completed, dividing this number by the total number of protocol indicators on the coaching log minus any indicators checked as not applicable, and multiplying by 100. These percentages were summed across all randomly selected fidelity sessions in the larger study for each coach and divided by the total number of fidelity sessions.

8b. The procedures in #8a were repeated for the coaching debrief sessions selected for use in the present study.

8c. Differences in coach-teacher dyads' total time spent in each coaching debrief meeting were calculated by (a) summing the duration across sessions 1-15 for all teachers who worked with the coach in the larger study and (b) dividing the total duration for each coach by the total number of sessions to obtain the average coaching session duration for each teacher.

8d. The procedures in #8c were repeated for the coaching debrief sessions selected for use in the present study.

8e. The total number of action plans developed by all teachers the coach worked with in the larger study was counted and divided by the total number of teachers the coach worked with to calculate the average.

8f. The procedures in #8e were repeated for the coaching debrief sessions selected for use in the present study.


8g. The total number of embedded instruction practices addressed by each coach across all teachers coached in the larger study was summed; the sum was divided by 14 (the total number of practices) and multiplied by 100.

8h. The procedures in #8g were repeated for the coaching sessions with the teacher selected for this study.
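Items 8a through 8h are simple ratios and averages. The sketch below walks through each with hypothetical inputs; none of the numbers are drawn from the coaching logs.

```python
# Illustrative sketch of the coach-difference metrics in items 8a-8h.
# All inputs are hypothetical placeholders, not study data.

def session_adherence(completed, total_indicators, not_applicable):
    """Item 8a: percent of applicable protocol indicators checked as completed."""
    return completed / (total_indicators - not_applicable) * 100

# Items 8a/8b: average adherence across fidelity sessions for one coach
adherence = [session_adherence(18, 22, 2), session_adherence(19, 22, 1)]
mean_adherence = sum(adherence) / len(adherence)

# Items 8c/8d: average debrief duration across coded sessions
durations = [37, 42, 33, 45]                     # minutes per session
mean_duration = sum(durations) / len(durations)

# Items 8e/8f: average number of action plans per teacher
action_plans_per_teacher = [5, 4, 4]
mean_plans = sum(action_plans_per_teacher) / len(action_plans_per_teacher)

# Items 8g/8h: percent of the 14 embedded instruction practices addressed
practices_addressed = 12
pct_practices = practices_addressed / 14 * 100

print(mean_adherence, mean_duration, mean_plans, pct_practices)
```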


Table 3-1. Percentage of total time in each instructional learning format for cohort 1 fall modules across sites.

Format                  Module 1  Module 2  Module 3  Module 4  Across Modules
Lecture                     40.1      36.3      46.4      50.6            43.3
Sm. Group Work              10.0      10.3       1.1       0.7             5.5
Sm. Group Discussion         1.1       6.5       8.9       0.0             4.1
Individual Work             12.5      15.1      21.5       4.5            13.4
Lg. Group Discussion        16.3      22.4      10.6      28.7            19.5
Q & A                       10.2       2.9       7.6       8.3             7.2
Demonstration                2.0       0.2       0.2       0.4             0.7
Watching Video               5.0       6.1       3.6       6.9             5.4
Other                        2.8       0.2       0.0       0.0             0.7

Note. Sm. = small; Lg. = large; Q & A = question and answer.


Table 3-2. Coaching practices observation tool-pilot version 1.

Level 1 - Strategy Codes (Duration Codes): Reflective Conversation; Problem Solving; Performance Feedback; Collaborative Goal Setting; Organization; Rapport Building; Other.

Level 2 - Coaching Behaviors (Event Codes): Mistake contingent technical instruction; Goal focused technical instruction; Reinforcement; Reinforcement and elaboration; General encouragement; Mistake contingent encouragement; Role-play; Demonstration; Open-ended questions; No coaching behaviors coded.


Table 3-3. Coaching practice observation tool-pilot version 2.

Level 1 - Conversation Focus (Duration Codes): Coaching Process; Goal Setting and Action Planning; Information Seeking; Problem Solving; Reflection; Review Statement; Personal/Pleasantries; Other.

Level 2 - Coach Verbal Behavior (Event Codes): Demonstration; Directives; Neutral Statements; Supportive Verbal Feedback; Constructive Verbal Feedback; Questioning.

Level 3 - Materials (Modifies Level 2): Classroom Materials; Content Resources; Video; Visual Data; No visual.


Table 3-4. Coaching practices observation tool-research version 1.0 operational definitions.

Coach Verbal Behaviors (Event Codes)

Neutral Statements: The coach makes neutral verbal statements of 2 or more words.

General Praise: The coach provides general positive statements or statements of agreement about something the teacher said, did during the observation, or a product (e.g., planning form) the teacher made that is aligned with the implementation of targeted practices or classroom quality.

Supportive Feedback: The coach provides specific praise by describing positive aspects of something the teacher said, did during the observation, or a product (e.g., planning form) the teacher made that is aligned with the implementation of targeted practices or classroom quality.

Constructive Feedback: The coach responds to something the teacher said, did during the observation, or a product (e.g., planning form) the teacher made that is either not aligned with targeted teaching practices or not implemented as intended by providing guidance on how to improve implementation fidelity.

Instructional Statements: The coach makes statements to inform or teach about how to enhance future implementation of the targeted practices or classroom quality.

Clarifying Questions: The coach asks the teacher a question to confirm understanding, actions, or scheduling.

Probing Questions: The coach asks the teacher a question to encourage him/her to share personal opinions, perspectives, or feelings related to the classroom, target children, or target practices.

Demonstration: The coach engages in role-play OR provides a live verbal model of how to implement a practice.


Table 3-4. (Continued).

Conversation Focus (Duration Codes)

Personal/Pleasantries: A verbal exchange initiated by the coach or teacher to engage in pleasantries or share personal information that is not related to the implementation of target practices.

Coaching Process: A verbal exchange initiated by the coach or teacher, in which participants discuss the coaching process, including individual roles, activities to be completed, scheduling, and materials.

Goal Setting and Action Planning: A verbal exchange initiated by the coach or teacher, in which (1) goals or action steps are identified and revised AND/OR (2) the coach provides goal-focused instruction designed to increase the teacher's capacity to implement target practices.

Problem Solving: A verbal exchange initiated by the coach or teacher in which a problem related to implementation of target practices or target children's needs is identified AND potential solutions to the identified problem are generated by the participants.

Reflection and Feedback: A verbal interaction initiated by the coach or teacher, in which (1) participants engage in a discussion about what occurred during the observation AND (2) share their personal opinions, perspectives, or feelings related to the classroom or target practices.

Summarizing: The coach makes one or more statements about what occurred in the observation OR summarizes what has occurred previously during the debriefing conversation. This code does not include any teacher talk.

Uncodeable: The coach or teacher behavior cannot be accurately recorded because one of the following events occurred for 5 or more seconds: (a) out of frame (the coach or the teacher leaves the frame of the camera); (b) talking is directed to someone other than the coach.


Table 3-5. Cohort 1 coaching fidelity by debrief session type for larger study.

Debrief Session Type   Fidelity M (SD)   Interrater Agreement M (SD)   Coach-Coder Agreement M (SD)
Session 1              79 (18.4)         90 (7.8)                      86 (12.4)
Sessions 2-14          91 (6.7)          87 (5.9)                      89 (5.2)
Session 15             93 (4.1)          93 (4.2)                      93 (3.0)


Table 3-6. Available video footage from larger study for included dyads.

Dyad     Sessions 1-5   Sessions 6-10   Sessions 11-15   Excluded (a)
Dyad 1        5               3                5              0
Dyad 2        2               5                5              0
Dyad 3        5               5                3              2
Dyad 4        5               3                4              3
Dyad 5        4               5                3              3
Dyad 6        5               3                5              2
Dyad 7        5               5                5              0

(a) Video file is invalid, missing, or incomplete.


Phase 1: Workshops & Orientation Meeting
- Attend four 4-hour interactive workshops to gain shared knowledge and terminology associated with the practice focus
- Learn about the classroom and target children
- Review the coaching process
- Discuss coaching strategies
- Schedule the first observation

Phase 2: Session 1
- First classroom observation (approximately 60 min)
- First debrief (approximately 60 min)
- Review needs assessment and develop the first action plan

Phase 3: Sessions 2-14
- Classroom observation each week (approximately 60 min)
- Debrief each week (approximately 30-60 min)
- Development of action plans as needed

Phase 4: Session 15
- Final classroom observation (approximately 60 min)
- Final debrief (approximately 60 min)
- Review key practices and action plans completed to date
- Plan for sustainability

Phase 5: Post-Intervention
- Teachers continue to implement embedded instruction in their classrooms
- Coaches check in every 2 weeks by email to offer support and answer questions

Closing Meeting
- End-of-the-year meeting to reflect on progress to date and to discuss the sustainability year

Figure 3-1. Phases of the coaching partnership. Phases are ordered from more support (top) to less support (bottom).


Protocol Component                                              Orientation   Session   Sessions   Session   Closing
                                                                Meeting       1         2-14       15        Meeting
Meeting Preparation
  Review planning forms and gather resources                        X            X          X         X          X
Focused Observation
  Observation; data collection; modeling and in-situ support       --            X          X         X         --
Opening the Debrief
  Establish shared priorities                                        X            X          X         X          X
Reflection
  Share coach- and teacher-collected data (e.g., anecdotal
  notes, graphs, videos, checklists)                                --            X          X         X          X
  Collaborative reflection regarding the teacher's observed
  implementation                                                    --            X          X         X          X
Performance-based Feedback
  Supportive verbal and graphic feedback                             X            X          X         X          X
  Constructive verbal and graphic feedback designed to
  enhance the teacher's implementation fidelity                     --            X          X         X          X
  Goal-focused instruction and collaborative problem-solving
  regarding strategies for improving or maintaining
  implementation fidelity                                           --            X          X         X          X
Targeted Support
  Strengths and needs assessment                                    --            X          X         X          X
  Shared goal setting and action planning; discuss
  project-based resources                                            X            X          X         X          X
Closing
  Review action steps; acknowledge progress and thank
  the teacher                                                       --            X          X         X          X

Note. X = component included; -- = component not included. Session 1, Sessions 2-14, and Session 15 were the focus of the present study.

Figure 3-2. Coaching protocol components by session.


Goal Setting & Action Planning

a. Definition: A verbal exchange initiated by the coach or teacher, in which (1) goals or action steps are identified and revised AND/OR (2) the coach provides goal-focused instruction designed to increase the teacher's capacity to implement target practices in the future.

i. The coach and teacher may expand upon an existing Action Plan by adding a new action step.

Example
Conversation Focus: Goal Setting and Action Planning
Coach Verbal Behavior: Questioning
Materials: Planning Forms

Turn 1 (Teacher): I'm having a hard time figuring out what type of support to provide for Taylor, when she is learning to name letters. I think what I was doing was a good idea, but I wanted to add some action steps around how to talk with her family and the other personnel on her team. I feel like I am just learning it myself and I'm having trouble explaining it to everyone.
Turn 2 (Coach): Do you want to add an action step for looking at how we can tweak the letters to meet the needs we've identified for Taylor?
Turn 3 (Teacher): Yeah, I think that would be really helpful.

Explanation: Although the teacher identified a problem (how to share information), this would still be coded as Goal Setting and Action Planning, because the coach provided the solution rather than working with the teacher to generate two or more solutions. Therefore, the exchange does not meet the definition of problem solving.

Figure 3-3. A page from the coaching practices observation tool-pilot version 1.


COACH: Our next step [on the action plan] was to make the center time matrix with your coach which we did, and then to practice implementing during center time and snack time. So, tell me how did it feel today? [Neutral Statement; Probing Question]

TEACHER: I felt like it was very doable. I like the matrix. I like where it's at. I like how big it is. Actually, I actually like it. I think it's in a good place. [Teacher talk]

COACH: And I saw you using it today! I saw you on – using it for two reasons one to try to remember what the targets were and two to take data! [Supportive Verbal Feedback]

TEACHER: And it felt really doable because it was in a place that was really easily accessible, even with somebody at the sand table it was still easy for me to get to [Teacher talk]

COACH: And if the sand table is an issue you could move it over to the door because that's closed when the children are in here pretty much, right? [Neutral Statement]

(Conversation Focus: Reflection and Feedback)

TEACHER: Yea that's true, but it was nice and big. It was easy to use. It was easy to see and I think I had the one misunderstanding about the counting of objects. Cause I kept pressing him for seven and I didn't realize it was up to seven until you told me it was okay to stop pushing seven. [Teacher talk]

COACH: Yeah, so any time you write up to like you did in this target that tells me it can be 7, but it can also be other sets as you work up to seven. Like four or five, because you may not always have exactly seven objects. [Constructive Verbal Feedback]

TEACHER: Oh yeah, well he was going great! He was like "un, two, eee, or, ive" and I was hearing numbers! I was so excited! [Teacher talk]

COACH: Yeah, I hear two and three pretty clearly. [Neutral Statement]

TEACHER: Those are real clear and five. [Teacher talk]

COACH: Yeah, I think the vowel sounds are easier for him. [Neutral Statement]

TEACHER: Yea and /v/. He does the /v/ sound good he does [Teacher talk]

(Conversation Focus: Summarizing)

COACH: Yeah, certainly he was engaged with the trains. He counted them with you and he had actually approximated counting them on his own, when you were doing toileting after you practiced together. He said, "un, oo, un, oo" for 1, 2, 1, 2. So we know that activity is pretty motivating. [Supportive Verbal Feedback]

Figure 3-4. Coded transcript of a coaching debrief meeting. Coach verbal behavior codes appear in brackets after each utterance; conversation focus codes appear in parentheses where they change.


Assessed for eligibility: Coaches (n = 8); Teachers (n = 15); Videos (n = 210)
  Excluded: Coach left study after session 3 (n = 1); Teachers left study (n = 2); Videos from Phase 1 and Phase 5 (n = 22); Teacher or coach left the study (n = 19 videos)

Assessed for intervention consistency and video data collected weekly: Coaches (n = 7); Teachers (n = 11); Videos (n = 169)
  Excluded: Coaching did not occur as planned with weekly interval (n = 2 teachers); Teacher not selected for sample (n = 66 videos); Missing and incomplete video (n = 12)

Selected for sampling: Coaches (n = 7); Teachers (n = 7); Videos (n = 91)
  Excluded: Videos not randomly selected (n = 70)

Sessions 1-5: Coaches (n = 7); Teachers (n = 7); Selected videos (n = 7)
Sessions 6-10: Coaches (n = 7); Teachers (n = 7); Selected videos (n = 7)
Sessions 11-15: Coaches (n = 7); Teachers (n = 7); Selected videos (n = 7)

Figure 3-5. Video sampling for present study.


CHAPTER 4 RESULTS

The primary purpose of the present study was to describe how coaches facilitate

PBC debrief conversations with preschool teachers by quantitatively characterizing the conversation focus, the extent to which coaches or teachers initiated the conversations, and coach verbal behavior. A secondary purpose was to explore if conversation focus, initiations by the coach versus the teacher, or coach verbal behavior changed over time. As described in Chapter 3, data for the present study were (a) video-recorded practice-based coaching (PBC) debrief meetings, (b) two demographic questionnaires completed as part of a larger study by the teachers and coaches in the present study, and (c) information gathered about each coach-teacher dyad's weekly sessions during year one of the Tools for Teachers (TfT) study.

The study sample comprised 21 video-recorded debrief meetings, representing seven coach-teacher dyads at three occasions (i.e., occasion 1 = sessions 1-5, occasion 2 = sessions 6-10, occasion 3 = sessions 11-15; see Figure 3-5). Video-recorded debrief meetings were coded using the Coaching Practices Observation Tool-Research Version 1 (CPOT-RVI; Shannon & Snyder, 2016), which quantified the duration of conversation focus, number of coach and teacher initiations, and coach verbal behaviors.

In the present study, the following research questions were addressed:

1. What proportion of the coaching debrief time was spent in each conversation focus across all coaching debrief meetings?

1a. Did differences exist between dyads in the proportion of coaching debrief time spent in each conversation focus?

2. Did the proportion of coaching debrief time, spent in each conversation focus, change over time?


2a. Did differences exist between dyads in the proportion of coaching debrief time spent in each conversation focus over time?

3. Of the total number of conversation initiations, what proportion were initiated by the coach versus the teacher?

3a. What proportion of each conversation focus was initiated by the coach versus the teacher?

3b. Did differences exist between dyads in the proportion of conversation foci initiations by the coach versus the teacher?

4. Did the proportion of coach versus teacher initiations change over time?

4a. Did differences exist between dyads in the proportion of conversation foci initiations over time?

5. At what rate was each coach verbal behavior used across all coaching debrief meetings?

5a. Did differences exist between coaches in the rate of verbal behaviors used?

6. Did the rate of coach verbal behavior change over time?

6a. Did differences exist between coaches in the rate of verbal behaviors used over time?

7. At what rate were coach verbal behaviors used within each conversation focus?

7a. Did differences exist between coaches in the rate of verbal behaviors used within each conversation focus?

8. What were the demographic characteristics of each dyad and were there differences in the coaching implementation, proportion of time spent in each conversation focus, coach and teacher initiations, and rate of coach verbal behavior relative to the overall sample?

Described below are participant demographics and interobserver agreement results for the CPOT followed by descriptions of findings for each of the study research questions.

Participant Demographics

Coach and teacher demographic data are shown in Tables 4-1 and 4-2, respectively. Data for the dyad’s coaching debrief meetings are shown in Table 4-3.

Coaches were white females between the ages of 28 and 65 years. The highest degree held by most coaches was in Early Childhood Education or a related discipline; Coach 5's degree was in Physical Education and Therapeutic Recreation. All of the coaches except Coach 4 had some classroom experience with children birth to age 5, although the amount of experience varied from 0 to 300 months (i.e., up to 30 years). Five coaches had prior experience as a coach or consultant, with three of those five having received training on adult learning principles prior to the study.

Teachers were white females between the ages of 23 and 54 years. Most teachers held a bachelor's degree or higher in child development or early childhood education; Teachers 2 and 7 held a degree in Elementary Education. Only Teachers 2 and 3 held a degree in Special Education, yet all of the teachers were providing services to children with identified disabilities and individualized education programs (IEPs). The teachers' classroom experience with children ages birth to 5 ranged from 1 to 73 months (approximately 6 years) at the beginning of the study.

Coaching Implementation Variables

Coaches' average implementation fidelity for the debrief protocol in year one was high across all teachers who received coaching within the larger TfT study (M = 91.4, SD = 5.8) and within the sample of 7 teachers selected for the present study (M = 91.6, SD = 5.2). The average duration of coaching debrief meetings across sessions 1 to 15 in the selected sample of 7 teachers was 43 minutes (SD = 11.6, range 22-92). On average, dyads developed 4.3 action plans across the 15 weeks of coaching debrief meetings included in the present study, addressing 91.8% (SD = 13.3, range 64.3%-100%) of the 14 key embedded instruction practices.


Interobserver Agreement

A secondary coder was trained by the student investigator on the CPOT-RVI (Shannon & Snyder, 2016) to conduct interobserver agreement coding following the training methods described in Chapter 3. Twenty-one video-recorded PBC debrief meetings were included in the present study sample. The secondary coder coded 48% (n = 10) of these videos. Seven videos were randomly selected and coded for agreement. In addition, all three videos that included the student investigator as the coach were purposively selected and coded for reliability.

The mean interobserver agreement percentage score for duration of conversation foci codes averaged across the 10 sessions ranged from 71.0% to 100%. The mean interobserver agreement percentage score for the coach versus teacher initiation codes averaged across the 10 sessions ranged from 55.7% to 100%. For the coach verbal behavior event codes, the mean interobserver agreement percentage score averaged across the 10 sessions coded ranged from 62.1% to 80.8%. The interobserver agreement for each CPOT-RVI code is shown in Tables 4-4 and 4-5. The average Cohen's kappa across the 10 coded sessions was 0.79 (range 0.56 to 0.93); kappa for each coaching debrief meeting video is shown in Table 4-6.
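For reference, the sketch below shows how session-level percent agreement and Cohen's kappa can be computed once two coders' codes have been aligned into comparable units. It is an illustration with made-up code sequences; the agreement procedures actually used are those described in Chapter 3.

```python
# Illustrative percent agreement and Cohen's kappa for two coders' aligned codes.
# The code sequences below are made up for demonstration.
from collections import Counter

primary   = ["NS", "CQ", "NS", "GP", "SF", "NS", "PQ", "NS"]
secondary = ["NS", "CQ", "NS", "GP", "NS", "NS", "PQ", "NS"]

n = len(primary)
po = sum(a == b for a, b in zip(primary, secondary)) / n   # observed agreement

# expected agreement from each coder's marginal code proportions
p1, p2 = Counter(primary), Counter(secondary)
pe = sum((p1[c] / n) * (p2[c] / n) for c in set(primary) | set(secondary))

kappa = (po - pe) / (1 - pe)
print(round(po * 100, 1), round(kappa, 2))
```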

Conversation Focus

This section describes the results of the present study with respect to the proportion of time spent in each conversation focus. The CPOT-RVI is a continuous timed-event behavioral observation system. The system includes six duration codes, plus Uncodeable. Duration of conversation codes include: Personal/Pleasantries, Coaching Process, Goal Setting and Action Planning, Problem-Solving, Reflection and Feedback, and Summarizing.


Research Question 1: Proportion of Time Spent in Each Conversation Focus

Across all 21 coaching debrief sessions, 936 minutes of footage were coded. Twenty-five minutes, or 2.6% of the total time, were Uncodeable and excluded from the analyses. The overall proportion of time for each conversation focus is shown in Table 4-7 and Figure 4-1. The majority of the coaching debrief session time was spent in Reflection and Feedback (42.7%) and Goal Setting and Action Planning (33.6%), accounting for more than three-quarters of the total time. The percent of time spent in the remaining conversation foci was: Coaching Process (10.4%), Summarizing (6.2%), Problem-Solving (5.9%), and Personal/Pleasantries (1.2%).
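These proportions follow directly from the duration codes: the time coded to each conversation focus is summed across sessions and divided by the total codeable time. A small sketch with invented durations (not the study's values) is shown below.

```python
# Proportion of debrief time per conversation focus from summed duration codes.
# Durations (in minutes) are invented for illustration.
durations = {
    "Reflection and Feedback": 389.0,
    "Goal Setting and Action Planning": 306.0,
    "Coaching Process": 95.0,
    "Summarizing": 56.0,
    "Problem-Solving": 54.0,
    "Personal/Pleasantries": 11.0,
}
codeable_total = sum(durations.values())   # uncodeable time already excluded

proportions = {focus: d / codeable_total * 100 for focus, d in durations.items()}
for focus, pct in proportions.items():
    print(f"{focus}: {pct:.1f}%")
```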

Research Question 1a: Coach Differences in Percent of Time Spent in Each Conversation Focus

Differences between coaches in the mean proportion of coaching debrief time spent in each conversation focus across all three sessions are shown in Table 4-8 and Figure 4-2. Most dyads used all of the duration codes; the exception was Dyad 5, which was never coded as engaging in a Problem-Solving exchange. For all dyads, more than two-thirds of the coaching debrief was spent in Reflection and Feedback (M = 41.7%, SD = 6.8, range 29%-49%) and Goal Setting and Action Planning (M = 32.8%, SD = 6.8, range 20.6%-39.5%). Dyads 4, 6, and 7 spent more than the overall mean proportion of time in Reflection and Feedback. Dyads 2, 3, 5, and 7 spent more than the overall mean proportion of time in Goal Setting and Action Planning. Dyad 2 spent the lowest proportion of time in Reflection and Feedback and Goal Setting combined at 64.7%, while Dyad 7 spent the highest proportion at 83.3%.

The proportion of time spent in the other conversation focus codes, in order from most to least prevalent, was: Coaching Process (M = 10.8%, SD = 6, range 5.7%-22.8%), Summarizing (M = 6.7%, SD = 6.1, range 1%-16.8%), Problem-Solving (M = 5.6%, SD = 4.1, range 0%-10.5%), and Personal/Pleasantries (M = 1.2%, SD = 0.6, range 0.3%-1.9%). Dyads 2, 4, and 6 spent higher than average proportions of time in Coaching Process. Dyads 1 and 4 spent more than two times the overall mean proportion of time in Summarizing. Dyads 4 and 5 spent less than the overall mean proportion of time in Problem-Solving. Dyads 1, 5, 6, and 7 spent higher than the overall mean proportion of time in Personal/Pleasantries.

Research Question 2: Proportion of Time Spent in Each Conversation Focus Across Occasions

The percentage of debrief time spent in each conversation focus across the coaching partnership was calculated by applying the analytic procedures described in 1a three times. Data from these analyses are presented in Table 4-7 and Figure 4-2.

Across all three occasions, Reflection and Feedback or Goal Setting and Action Planning represented the largest proportion of conversational time. There were consistent decreases in the proportion of time spent in Personal/Pleasantries, Coaching Process, and Summarizing across the three occasions. In contrast, the proportion of time spent in Problem-Solving increased by approximately five percentage points, from 4.2% at occasion 1 to 9.3% at occasion 3.

Research Question 2a: Coach Differences in Proportion of Time Spent in Each Conversation Focus Across Occasions

The proportion of time in each conversation focus across the three occasions was calculated for each dyad. These data are shown in Table 4-9. Participation in Personal/Pleasantries conversations was inconsistent across dyads, with Dyads 2 and 3 participating in little to no Personal/Pleasantries. For Dyads 5 and 7, a steady decline in the proportion of time spent in Personal/Pleasantries was evident. Coaching Process conversations steadily decreased across time for Dyads 2, 3, 4, and 7; however, Dyad 1 showed a steady increase in Coaching Process conversations from time 1 to time 3. There was variability in the proportion of time dyads spent in Goal Setting and Action Planning, but it was used by all dyads at all time points. Only Dyad 3 showed a consistent trend over time, decreasing from 59.2% at occasion 1 to 20% and 11.1% at occasions 2 and 3, respectively. Problem-Solving conversation was a consistent feature of the coaching debrief meeting for Dyad 7, appearing at all three occasions and increasing from 3.3% at occasion 1 to 13.9% at occasion 3. Problem-Solving conversations never occurred for Dyad 5. Dyads 3 and 6 engaged in Problem-Solving conversation on two occasions, while Dyads 1, 2, and 4 only engaged in Problem-Solving on one occasion. Similar to Goal Setting and Action Planning, Reflection and Feedback conversations were used by all dyads on each occasion; however, the proportions were variable. Dyad 3 showed a consistent increase in Reflection and Feedback, while Dyad 7 showed a consistent decrease across occasions. No other dyads showed consistent trends in the proportion of time spent in Reflection and Feedback. Dyads 1, 3, 4, and 5 consistently engaged in the Summarizing conversation focus for some portion of their coaching debrief meetings across occasions. The only consistent decrease in Summarizing appeared in Dyad 5, and no dyads exhibited consistent increases.

Research Question 3: Coach versus Teacher Initiations

This section describes the results of the present study with respect to observed coach versus teacher initiations (Table 4-10). The initiation code is a binary forced-response code following the onset of one of the six duration codes. It was used to identify who within the dyad initiated the onset of a new conversation focus. Uncodeable did not receive a coach or teacher initiation code. The onset of a conversation focus was recorded on 365 occasions across all 21 observed and coded coaching debrief sessions. The overall percentages of coach and teacher initiations were 77% and 23%, respectively.
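Because each onset of a conversation focus receives exactly one initiation code, these proportions are simple counts over the recorded onsets. A minimal sketch with hypothetical tallies:

```python
# Proportion of coach- versus teacher-initiated conversation foci.
# The tallies are hypothetical, not the study's counts.
initiations = {"coach": 30, "teacher": 10}

total = sum(initiations.values())
for who, count in initiations.items():
    print(f"{who}: {count / total * 100:.0f}%")
```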

Research Question 3a: Proportion of Coach and Teacher Initiations by Conversation Focus

The analytic procedure used for Research Question 3 was repeated for each conversation focus. Results of these analyses are shown in Table 4-10. Coaches were, on average, four times more likely than teachers to initiate Personal/Pleasantries, Coaching Process, and Goal Setting and Action Planning. Coaches were, on average, two times more likely to initiate Reflection and Feedback conversations. In contrast to all other conversation foci, Problem-Solving had an equal distribution of initiations across coaches and teachers. Using the operational definitions outlined in the CPOT-RVI coding system, the initiation data showed coaches initiated all Summarizing statements.

Research Question 3b: Coach Differences in the Proportion of Coach and Teacher Initiations by Conversation Focus

Differences in coach and teacher initiations by conversation focus are shown in Table 4-11. Personal/Pleasantries was initiated more often by the coach across all seven coach-teacher dyads. Coaches 3, 5, and 6 initiated 100% of Personal/Pleasantries, while Dyad 4 had an equal number of coach and teacher initiations. Coaching Process was initiated more often by the coach across all seven coach-teacher dyads. Coaches 3 and 5 initiated 100% of Coaching Process and Coach 7 had the lowest percentage (62.5%) of initiations compared to the other dyads. Goal Setting and Action Planning was initiated more often by the coach across all seven coach-teacher dyads. Coaches 4 and 5 initiated 100% of Goal Setting and Action Planning and Coach 6 had the lowest percentage (55.6%) of Goal Setting and Action Planning initiations compared to the other dyads. Problem-Solving was initiated more often by the coach in four of seven coach-teacher dyads. Dyad 5 never engaged in Problem-Solving, Dyad 6 had an equal number of coach and teacher Problem-Solving initiations, and, in Dyad 7, the teacher initiated 100% of the Problem-Solving conversations. Reflection and Feedback was initiated more often by the coach in six of the seven coach-teacher dyads. Dyad 6 had an equal number of coach and teacher Reflection and Feedback initiations. Coaches initiated 100% of the Summarizing statements, in alignment with the CPOT-RVI manual.

Research Question 4: Proportion of Coach and Teacher Initiations Across Occasions

Changes in the proportion of coach and teacher initiations across the coaching partnerships are shown in Table 4-12. The overall percentage of coach-initiated conversation foci decreased consistently across occasions, from 83.2% at occasion 1 to 70.4% at occasion 3, while the percentage of teacher-initiated exchanges increased from 16.8% at occasion 1 to 29.6% at occasion 3.

Research Question 4a: Proportion of Coach and Teacher Initiations Across Time by Conversation Focus

Changes in the proportion of coach and teacher initiations across the coaching partnership by conversation focus are shown in Table 4-12. Teachers' proportions of initiations related to Personal/Pleasantries, Coaching Process, Goal Setting and Action Planning, Problem-Solving, and Reflection and Feedback showed an increase from occasion 1 to occasion 3. The smallest increase in teacher initiations from occasion 1 (20.8%) to occasion 3 (23.5%) was related to Goal Setting and Action Planning. The largest increase in teacher initiations from occasion 1 (25%) to occasion 3 (57.1%) was related to Problem-Solving.

Research Question 4b: Coach Differences in the Proportion of Coach and Teacher Initiations Across Occasions

Differences in coach and teacher initiations across the three occasions are shown in Table 4-13. The coach initiated more than 80% of the conversation foci in five out of seven coach-teacher dyads (i.e., Dyads 1, 2, 3, 4, and 5). Dyad 2 had the highest proportion of coach initiations (87.8%), while Dyad 7 had the lowest proportion of coach initiations (63.1%). A consistent decrease in coach initiations and increase in teacher initiations over occasions were demonstrated for Dyads 1, 4, and 6. Although there was not a consistent trend, there were also decreases in coach initiations and increases in teacher initiations from occasion 1 to occasion 3 in Dyads 2, 5, and 7. Dyad 3 had a small increase in coach initiations and decrease in teacher initiations from occasion 1 (17.6%) to occasion 3 (16.7%). The largest change (31.2%) in coach and teacher initiations from occasion 1 to occasion 3 occurred in Dyad 7. This was the only dyad where teacher initiations exceeded coach initiations. At time 3, Coach 7 initiated 42.9% of the conversation foci while Teacher 7 initiated 57.1% of the conversation foci.

Coach Verbal Behavior

This section describes the results of the present study with respect to the rate of coach verbal behavior per 30 minutes. Across all 21 sessions, there were 921 minutes of video debrief time included in these analyses; 25 minutes of Uncodeable time were excluded. The CPOT-RVI includes eight verbal behavior event codes: Neutral Statements, General Praise and Agreement, Supportive Verbal Feedback, Constructive Verbal Feedback, Instructional Statements, Clarifying Questions, Probing Questions, and Demonstration.

Research Question 5: Rate of Coach Verbal Behavior

The average rate of coach verbal behavior per 30 minutes is shown in the overall column of Table 4-14. Coaches used Neutral Statements most often, making on average 61.2 Neutral Statements per 30 minutes. Coaches used Clarifying Questions, General Praise and Agreement, and Instructional Statements 13.5, 10.0, and 9.5 times per 30 minutes, respectively. In contrast, four of the verbal behavior codes were used on average at a rate of less than 5 per 30 minutes. These verbal codes were Probing Questions (4.5), Supportive Verbal Feedback (2.8), Constructive Verbal Feedback (3.1), and Demonstration (2.0).

Research Question 5a: Coach Differences in Rate of Verbal Behavior

Differences in coaches' rates of verbal behavior per 30 minutes are shown in Table 4-15. The mean number of verbal behaviors demonstrated was 104.3 (SD = 23.5). Coaches 2 and 4 exhibited the highest rates of verbal behavior, with 132.9 and 133.0 verbal behaviors per 30 minutes. The lowest rate of coach verbal behavior was seen in Coach 6, with 72.0 verbal behaviors per 30 minutes.

All coaches used more Neutral Statements per 30 minutes than any other type of verbal behavior (M = 59.7, SD = 11.9). Required verbal behaviors associated with the implementation of the protocol were demonstrated by all coaches; the rates per 30 minutes were: Supportive Verbal Feedback, M = 2.7 (SD = 1.2); Constructive Verbal Feedback, M = 3.0 (SD = 1.9); and Clarifying Questions, M = 13.9 (SD = 6.4). Supportive Verbal Feedback occurred, on average, less than five times per 30 minutes for all coaches. General Praise and Agreement, a similar but less specific type of positive feedback, occurred nearly 10 times on average (M = 9.9, SD = 4.0) within 30 minutes. Similarly, Constructive Verbal Feedback occurred, on average, three times (M = 3.0, SD = 1.9) within 30 minutes for all coaches. Instructional Statements, designed to teach or inform but not explicitly connected to improving something observed in the classroom, occurred on average nearly nine times (M = 8.9, SD = 5.1) within 30 minutes for all coaches except Coach 3, who rarely used Instructional Statements. Most coaches implemented Clarifying Questions 13 or more times on average (M = 13.9, SD = 6.4) within 30 minutes, except Coaches 6 and 7, who used fewer Clarifying Questions. Probing Questions occurred at a lower rate than Clarifying Questions, occurring less than five times (M = 4.3, SD = 2.7) within 30 minutes for five out of seven coaches. Only Coaches 1 and 7 exhibited more than five Probing Questions per 30 minutes. Demonstration occurred at the lowest rate on average (M = 1.7, SD = 2.0) and never occurred for Coach 6.

Research Question 6: Rate of Coach Verbal Behavior Across Occasions

The rates of coach verbal behavior across the three occasions sampled were analyzed to evaluate whether changes were evident. The results are shown in Table 4-14 and Figure 4-4. Most verbal behaviors occurred at a fairly consistent rate across occasions, with rate differences of less than five verbal behaviors per 30 minutes. Neutral Statements (occasion 1 = 55.0, occasion 2 = 66.4, occasion 3 = 61.2) was the only code for which differences greater than five verbal behaviors per 30 minutes occurred. A small decrease in the rate of coach verbal behavior from occasion 1 to occasion 3 occurred in Supportive Verbal Feedback and Clarifying Questions, while a small increase occurred in General Praise and Agreement, Constructive Verbal Feedback, Instructional Statements, and Demonstration. Probing Questions remained constant from occasion 1 to occasion 3, but there was a decrease at occasion 2.

Research Question 6a: Coach Differences in Rate of Verbal Behavior Across Occasions

Differences in coaches' rates of verbal behavior per 30 minutes across occasions are shown in Table 4-16. The average rate of coach verbal behavior per 30 minutes increased from occasion 1 (M = 99.1, SD = 25.6) to occasion 3 (M = 119.6, SD = 39.66); however, sizable decreases in verbal behavior were observed for Coach 5 (occasion 1 = 117.9, occasion 3 = 81.3).

Research Question 7: Rate of Coach Verbal Behavior by Conversation Focus

The rate of each verbal behavior per 30 minutes within each conversation focus was calculated, and the results of these analyses are shown in Table 4-17 and Figure 4-5.

Neutral Statements occurred at the highest rate per 30 minutes within each conversation focus, and the highest rate of Neutral Statements (160.9) occurred within Personal/Pleasantries. In contrast, Constructive Verbal Feedback, Instructional Statements, and Demonstrations never occurred within Personal/Pleasantries. General Praise and Agreement (PA) and Supportive Verbal Feedback (SF) occurred within all conversation foci and most often during Reflection and Feedback (PA = 15.2 and SF = 3.9), Summarizing (PA = 11.8 and SF = 12.9), and Personal/Pleasantries (PA = 10.9 and SF = 2.7). Similarly, Constructive Verbal Feedback occurred often during Reflection and Feedback (5.6) and Summarizing (2.7), but also occurred frequently during Problem-Solving (2.8). Instructional Statements occurred most often within Problem-Solving (18.3), Goal Setting and Action Planning (14.1), and Reflection and Feedback (8.0). Clarifying or Probing Questions occurred within all conversation foci. Clarifying Questions occurred most often during Coaching Process (30.0), while Probing Questions tended to occur most often within Goal Setting and Action Planning (6.4). Demonstrations occurred at the lowest rate per 30 minutes overall and occurred most often during Problem-Solving (0.6).

Research Question 7a: Coach Differences in Rate of Verbal Behavior by Conversation Focus

Each coach's rate of verbal behavior per 30 minutes within each conversation focus was calculated to determine whether coaches used verbal behaviors at different rates within the conversation foci. The results are shown in Table 4-18 through Table 4-24. These tables are discussed in more detail in the next section.

Coach-Teacher Dyads

A primary purpose of the study was to explore and describe coach-teacher interactions. In an effort to better understand how coach behaviors may influence the teacher’s experience of the PBC framework and his or her exposure to the embedded instruction practice-focus of their coaching interactions, this section describes the results of the present study for each coach-teacher dyad with regard to conversation focus, initiations, rate of verbal behavior, and select implementation fidelity variables.

Although each dyad engaged in a transactional, collaborative partnership, results from these dyadic analyses emphasize the actions of the coach as the partner responsible for facilitating the coaching process during the debrief meeting and implementing the embedded instruction coaching protocol as intended.

Seven unique coach-teacher dyads were selected for data analysis. In addition to the coded videotapes, two demographic questionnaires were completed as part of the larger study by the teachers and coaches, and action plans and fidelity information were gathered about each coach-teacher dyad's weekly sessions during year one of the TfT study. Data for coach demographics, teacher demographics, and weekly coaching implementation can be found in Tables 4-1, 4-2, and 4-3, respectively.

Coach and Teacher Dyad 1

Coach 1 was a 28-year-old white female, with a master’s degree in education for

English language learners (ELL). She had 18 months of experience in the classroom with children birth to 5 and had prior training on adult learning and experience providing classroom consultation and coaching. In the present study, Coach 1 was paired with

Teacher 1, a 26-year-old white female who had a bachelor’s degree in early childhood and 29 months of experience in the classroom with children birth to five at the beginning of the study.

Across all three teachers she coached in the larger study, Coach 1's implementation fidelity of the debrief protocol was 92.2% (SD = 4.2). Her implementation fidelity of the debrief protocol with Teacher 1 was 95% (SD = 1.9), slightly higher and more consistent than her fidelity average in the larger study. Dyad 1's coaching debrief meetings lasted, on average, 37 minutes (range = 22-50), consistent with her duration for all three teachers coached in the larger study. From session 1 to session 15, Dyad 1 developed five action plans addressing 64.3% of the 14 embedded instruction practices. The number of action plans developed was slightly higher than average (M = 4.3) across teachers in the present study, while the percent of practices addressed was lower than that of any other dyad.

On average, Coach 1 was the primary initiator of the conversation focus (M = 80.4%); however, Teacher 1 showed a steady increase in her percentage of conversation foci initiations, increasing from 5% at time 1 to 30% at time 3 (Table 4-11).

As shown in Table 4-8, this dyad, similar to the other dyads, spent the majority of the coaching debrief session in Reflection and Feedback (40.0%) and Goal Setting and

Action Planning (31.6%), with the coach using all of the verbal behaviors to elicit participation in these conversation foci. On average, this dyad spent three times as much time in Summarizing (16.8%), and slightly more time in Personal/Pleasantries

(1.8%) when compared to other dyads. The proportional increases in Summarizing and

Personal/Pleasantries resulted in slightly lower than average proportions for Coaching

Process (6.5%) and Problem-Solving (3.3%) when compared to other dyads.

Table 4-18 illustrates Coach 1's verbal behavior relative to the conversation focus. The coach provided her highest rate of Supportive Verbal Feedback, 14.5 instances within 30 minutes, during Summarizing, a coach-talk-only code, and her highest rate of Constructive Verbal Feedback, 16.1 instances within 30 minutes, during Problem-Solving. However, Coach 1 provided less Constructive Verbal Feedback, on average, compared to the other coaches. This coach had a high rate of questioning overall, but the distribution of these verbal behaviors across conversation foci reveals that she used Clarifying and Probing Questions at their highest rates, 60.14 and 15.04 within 30 minutes, during Personal/Pleasantries, a conversation focus that may have facilitated the collaborative partnership but was not related to the practice focus of coaching.

Coach and Teacher Dyad 2

Coach 2 was a 31-year-old white female, with a master's degree in early intervention. She had 9 months of experience with children birth to age five and had prior training on adult learning, but had not previously provided consultation and coaching. She coached one teacher in the larger study and her implementation fidelity of the debrief protocol was 96.3% (SD = 3.3). Coach 2 was paired with Teacher 2, a 23-year-old white female who had a bachelor's degree in elementary education and a master's degree in special education and 20 months of experience in the classroom with children birth to age 5 at the beginning of the study.

Dyad 2's coaching debrief meetings lasted 33 minutes on average (range = 25-50). From session 1 to session 15, Dyad 2 developed 4 action plans addressing 100% of the 14 embedded instruction practices. The number of action plans developed was well aligned with the overall average (M = 4.3) in the larger study, while the percent of practices addressed was higher than the average of the dyads in the present study.

On average, Coach 2 was the primary initiator of the conversation focus (M = 87.8%); however, Teacher 2 showed a small increase in her proportion of conversation focus initiations, increasing from 12.5% at time 1 to 14.8% at time 3 (Table 4-11). As shown in Table 4-8, this dyad, similar to the other dyads, spent the majority of the coaching debrief session in Reflection and Feedback (29%) and Goal Setting and Action Planning (35.7%). On average, this dyad spent the most time in Coaching Process (22.8%) and a higher proportion of time in Problem-Solving (9.9%) when compared to the other dyads. Only 2.7% of the time was spent in Summarizing (2.1%) and Personal/Pleasantries (0.6%), which was lower than the other dyads on average.

Coach 2 provided Supportive Verbal Feedback and Constructive Verbal Feedback at a higher rate, on average, than the other coaches (Table 4-15). Table 4-19 illustrates Coach 2's verbal behavior relative to the conversation focus. She provided her highest rate of General Praise and Agreement, Supportive Verbal Feedback, and Constructive Verbal Feedback, all at a rate of 26.1 instances within 30 minutes, during Summarizing, a coach-talk-only code. Coach 2 had the highest rate of Clarifying Questions (21.9 per 30 minutes) of all the coaches, and she used these questions across all conversation foci except Personal/Pleasantries. Coach 2's Probing Questions occurred at a lower rate, on average, compared to the other coaches. When she did implement Probing Questions, it was during Reflection and Feedback (4.67 per 30 minutes) and Goal Setting and Action Planning (3.79 per 30 minutes). Instructional Statements were often used by Coach 2 (15.5 per 30 minutes), occurring at a higher than average rate when compared to the other coaches, and she used them at the highest rate per 30 minutes during Problem-Solving (35.6), Goal Setting and Action Planning (20.5), and Reflection and Feedback (16.8). She rarely used Demonstration, 0.3 times per 30 minutes.

Coach and Teacher Dyad 3

Coach 3 was a 28-year-old white female, with a master’s degree in early childhood special education. She had 20 months of experience with children birth to age

5 and had previously provided consultation and coaching, although she had not received training on the principles of adult learning prior to the current study. Coach 3 was paired with Teacher 3, a 27-year-old white female who had a bachelor’s degree in communication and American Sign Language and a master’s degree in early childhood special education. Teacher 3 had 72 months of experience in the classroom with children birth to 5 at the beginning of the study.

Coach 3 coached two teachers in the larger study and her implementation fidelity of the debrief protocol was 93.0% (SD = 4.6) across both teachers coached in the larger study and 94.0% (SD = 4.2) with Teacher 3. Dyad 3's coaching debrief meetings lasted 34 minutes on average (range = 13-71). From session 1 to session 15, Dyad 3 developed 5 action plans addressing 85.7% of the 14 embedded instruction practices. The number of action plans developed was above the overall average of 4.3 in the present study, while the percent of practices addressed was lower than the overall average of 91.8% for the dyads in the present study.

On average, Coach 3 was the primary initiator of the conversation focus (M =

82.1%). Teacher 3 showed an increase in her percentage of conversation focus initiations from time 1 (17.6%) to time 2 (20.0%); however, at time 3 her percent of initiations decreased to less than time 1 at 16.7%. As shown in Table 4-8, this dyad, similar to the other dyads, spent the majority of the coaching debrief session in

Reflection and Feedback (40.9%) and Goal Setting and Action Planning (39.1%). This dyad spent the most time in Problem-Solving (10.5%) when compared to the other dyads, on average (M = 5.6). Higher rates of Reflection and Feedback, Goal Setting, and Problem-Solving left only 9.5% of the total debrief time for Coaching Process

(6.1%), Summarizing (3.1%), and Personal/Pleasantries (0.3%), which was lower than the other dyads on average, M = 10.8, M = 6.7, M = 1.2 respectively.

Coach 3 was more likely to use General Praise and Agreement than Supportive Verbal Feedback and used a lower than average rate of Supportive Verbal Feedback when compared to the other coaches (1.9 versus M = 2.7, SD = 1.2, per 30 minutes; Table 4-15). Table 4-20 illustrates Coach 3's verbal behavior relative to the conversation focus. She provided her highest rate of General Praise and Agreement and Supportive Verbal Feedback, both at a rate of 22.8 verbal behaviors per 30 minutes, during Summarizing, a coach-talk-only code. Her highest rate of Constructive Verbal Feedback occurred during Reflection and Feedback, at 1.9 verbal behaviors per 30 minutes, and she did not provide Constructive Verbal Feedback during any other conversation focus. Coach 3 provided less Constructive Verbal Feedback on average compared to the other coaches (M = 3.0, SD = 1.9). This coach had a higher than average rate of Clarifying Questions (21.5 per 30 minutes) compared to the other coaches (M = 13.9, SD = 6.4), and she used them across all conversation foci except Personal/Pleasantries and Summarizing. She also implemented Probing Questions and Instructional Statements during Problem-Solving, Goal Setting and Action Planning, and Reflection and Feedback; however, these occurred at rates of no more than 2.3 per 30 minutes. This is consistent with her lower than average rates of Probing Questions (2.3) and Instructional Statements (2.3) when compared to the other coaches (M = 4.3, SD = 2.7 and M = 8.9, SD = 5.1, respectively). Coach 3 rarely used Demonstration, only 0.3 times per 30 minutes, and it was only used in the context of Goal Setting and Action Planning.

Coach and Teacher Dyad 4

Coach 4 was a 59-year-old white female, with a master’s degree in education.

She did not have direct experience with children birth to age five as a teacher, assistant, therapist, or intern. Prior to the current study, she had provided consultation and coaching and had received training on principles of adult learning. She coached three teachers in the larger study; however, only one of the teachers completed the 15 weeks of coaching. For the present study, Coach 4 was paired with Teacher 4, a 40-year-old white female who had a bachelor's degree in early childhood education.

Teacher 4 had 1 month of experience in the classroom with children birth to age 5 at the beginning of the study.

Coach 4's implementation fidelity of the coaching debrief protocol with Teacher 4 was 92.0% (SD = 4.0), slightly lower than her average fidelity (93.1%, SD = 4.0) across all three teachers with whom she was assigned to work during the larger study. Dyad 4's coaching debrief meetings lasted 46 minutes on average (range 29-60). From session 1 to session 15, Dyad 4 developed five action plans, slightly more than the average of other dyads in the present study (M = 4.3). Their action plans addressed 100% of the 14 embedded instruction practices, more than the average of the other dyads (M = 91.8%).

On average, Coach 4 was the primary initiator of the conversation focus (M =

81.4%). Teacher 4 showed an increase in her percentage of conversation focus initiations from time 1 (15.8%) to time 2 and 3 (20.0%) (Table 4-13). As shown in Table

4-8, this dyad, similar to the other dyads, spent the majority of the coaching debrief session in Reflection and Feedback (48.9%) and Goal Setting and Action Planning

(20.6%). Compared to the other dyads, however, this dyad spent the second highest percentage of time in Reflection and Feedback, 7.2% higher than the average, and the lowest percentage of time in Goal Setting and Action Planning, 12.2% lower than the average. This dyad had the second highest percentage of time in Summarizing, a coach-talk-only code (13.8%), and Coaching Process (14.3%) when compared, on average, to the other dyads. Relatively little time was spent in Problem-Solving (1.6%) and

Personal/Pleasantries (0.8%), which was lower than the other dyads on average.

Coach 4 had the highest rate of Neutral Statements compared to the other coaches (75.9 versus M = 59.7 per 30 minutes). She used General Praise and Agreement (18.1) at a higher rate per 30 minutes than Supportive Verbal Feedback (2.4). Her rate of Supportive Verbal Feedback was slightly lower when compared to the other coaches (M = 2.7, SD = 1.2). Table 4-21 illustrates Coach 4's verbal behavior relative to the conversation focus. She provided her highest rate of General Praise and Agreement, 28.1 verbal behaviors per 30 minutes, during Reflection and Feedback, but also provided General Praise and Agreement during Summarizing (18.5), Goal Setting and Action Planning (9.9), and Coaching Process (3.6). Similarly, Coach 4 used Supportive Verbal Feedback at her highest rates during Summarizing (5.5) and Reflection and Feedback (3.6). Constructive Verbal Feedback was only offered during Reflection and Feedback, at a rate of 9.9 verbal behaviors per 30 minutes. Coach 4 had a higher than average rate of Constructive Verbal Feedback compared to other coaches.

All other verbal behaviors were aligned with the average rate of verbal behavior across the coaches, with Demonstration occurring at a slightly higher than average rate compared to the other coaches (2.9 versus M = 1.7, SD = 2.0). Demonstration was used by Coach 4 at rates of 5.7 and 1.2 verbal behaviors per 30 minutes, respectively, during Reflection and Feedback and Goal Setting and Action Planning. Although the overall rate of questioning was aligned with the other coaches, on average, Coach 4 engaged in the highest rates of Probing (16.06) and Clarifying Questions (39.26) during Coaching Process, compared to all other coaches. This latter finding was not surprising, given this dyad spent the largest proportion of time in Coaching Process.

Coach and Teacher Dyad 5

Coach 5 was a 65-year-old white female, with a bachelor's degree in physical education and therapeutic recreation. She had 300 months (30 years) of experience with children birth to 5, more than any other coach. Prior to the current study, she had not provided consultation and coaching, but had received training on principles of adult learning. Coach 5 was paired with Teacher 5, a 27-year-old white female who had a bachelor's degree in early childhood and family studies. Teacher 5 had 73 months of experience in the classroom with children birth to 5 at the beginning of the study.

Coach 5 coached two teachers in the larger study. Her implementation fidelity of the debrief protocol, on average, in the larger study was 88.1% (SD = 8.1), lower than the other coaches on average (M = 91.8, SD = 13.3). Coach 5's implementation fidelity of the coaching debrief protocol with Teacher 5 was 88.8% (SD = 3.7), slightly higher and more consistent than her average fidelity across both teachers with whom she was assigned to work. Dyad 5's coaching debrief meetings lasted 36 minutes, on average (range 26-45). From session 1 to session 15, Dyad 5 developed 4 action plans, consistent with the mean of 4.3 for dyads in the present study. The action plans addressed 100% of the 14 embedded instruction practices, higher than the average of the dyads (M = 91.8, SD = 13.3).

On average, Coach 5 was the primary initiator of the conversation focus (M = 83.3%). Teacher 5 showed an increase in her proportion of conversation focus initiations from time 1 (0%) to time 3 (16.7%; Table 4-12). As shown in Table 4-8, this dyad, similar to the other dyads, spent the majority of the coaching debrief session in Reflection and Feedback (40.1%) and Goal Setting and Action Planning (35.4%). This dyad was fairly consistent with the other dyads, on average, for all duration of conversation focus codes except Problem-Solving, in which the dyad never engaged during the three sessions coded.

Coach 5's rate of verbal behavior per 30 minutes across all verbal behaviors was within 4 behaviors of the mean of the other coaches. She was more likely to use General Praise and Agreement (10) than Supportive Verbal Feedback (1.8) and used a slightly lower than average rate per 30 minutes of Supportive Verbal Feedback when compared to the other coaches (M = 2.7, SD = 1.2). Table 4-22 illustrates Coach 5's verbal behavior relative to the conversation focus. She provided her highest rate of General Praise and Agreement, 32.2 verbal behaviors per 30 minutes, during Personal/Pleasantries, but also provided General Praise and Agreement during Reflection and Feedback (17.3), Goal Setting and Action Planning (5.8), Summarizing (5.3), and Coaching Process (2.7). Coach 5 used Supportive Verbal Feedback (SF) and Constructive Verbal Feedback (CF) at her highest rates during Summarizing (SF = 15.8, CF = 10.5). Both Supportive Verbal Feedback and Constructive Verbal Feedback were also used during Reflection and Feedback (SF = 1.3, CF = 3.2), but at much lower rates than during Summarizing. Clarifying Questions for Coach 5 occurred across a variety of conversation foci, but occurred at their highest rate per 30 minutes during Personal/Pleasantries (32.3), a conversation focus unrelated to the practice focus. In contrast, Probing Questions occurred at a higher rate per 30 minutes during Reflection and Feedback (4.5) and Goal Setting and Action Planning (3.6). The use of Demonstration by Coach 5 was lower than average compared to the other coaches (0.5 versus M = 1.7, SD = 2.0), and Demonstration was used by Coach 5 most often during Summarizing (5.3) and Coaching Process (2.7).

Coach and Teacher Dyad 6

Coach 6 was a 63-year-old white female, with a doctorate in early intervention.

She had 174 months of experience working with children birth to age 5. Prior to the current study, she had provided consultation and coaching, but had not received training on principles of adult learning. She coached two teachers in the larger study.

For the present study, Coach 6 was paired with Teacher 6, a 47-year-old white female who had a master’s degree in early childhood education. Teacher 6 had 42 months of experience in the classroom with children birth to age 5 at the beginning of the study.

Coach 6’s implementation fidelity of the debrief protocol was 89.8% (SD = 5.1) across both teachers in the larger study. Coach 6’s implementation fidelity of the coaching debrief protocol with Teacher 6 was 87.9% (SD = 4.7), slightly lower than her average fidelity across both teachers with whom she was assigned to work. Dyad 6’s coaching debrief meetings lasted 50 minutes on average (range = 40-60). From session 1 to session 15, Dyad 6 developed 3 action plans, fewer than the other dyads in the present study, on average (M = 4.3). Dyad 6 addressed 92.9% of the 14 embedded instruction practices, which was consistent with the other dyads, on average (M = 91.8).

Coach 6 was the primary initiator of the conversation focus (M = 67.3%). Teacher 6 initiated 32.7% of conversation foci, the second highest rate of teacher-initiated conversation foci compared to the other dyads. Teacher 6 showed a slight increase in her proportion of conversation focus initiations from time 1 (31.3%) to times 2 and 3 (33.3%, Table 4-13). As shown in Table 4-8, this dyad, similar to the other dyads, spent the majority of the coaching debrief session in Reflection and Feedback (49.0%) and Goal Setting and Action Planning (27.9%). This dyad spent the highest percentage of time in Reflection and Feedback, 7.3 percentage points higher than the average (M = 41.7%, SD = 6.8), and the second lowest percentage of time in Goal Setting and Action Planning (27.9%), 4.9 percentage points lower than the average (M = 32.8%, SD = 6.8). Other conversation foci were fairly consistent with the other coaches, on average.

Coach 6 had the lowest overall rate of coach verbal behavior, 72 verbal behaviors per 30 minutes, indicative of fewer conversational turns and longer utterances of a single type of verbal behavior. All of her rates of verbal behavior were lower than the average, and Demonstration was never used. She used General Praise and Agreement (9.4) at a higher rate per 30 minutes than Supportive Verbal Feedback (1.2), and used a slightly lower than average rate of Supportive Verbal Feedback when compared to the other coaches (M = 2.7, SD = 1.2, Table 4-15). Table 4-23 illustrates Coach 6’s verbal behavior relative to the conversation focus. She provided her highest rate of General Praise and Agreement, 15.9 verbal behaviors per 30 minutes, during Summarizing, but also provided General Praise and Agreement during Reflection and Feedback (10.8), Problem-Solving (10.1), Goal Setting and Action Planning (9.5), and Coaching Process (1.9). Coach 6 used Supportive Verbal Feedback at her highest rates during Summarizing (4.0) and Reflection and Feedback (2.1). Summarizing is a coach-talk-only code.

Constructive Verbal Feedback (CF) and Instructional Statements (IS) were used often during Problem-Solving (CF = 6.7, IS = 6.7) and occasionally during Reflection and Feedback (CF = 1.2, IS = 5.4). In addition, Instructional Statements occurred at their highest rate per 30 minutes during Goal Setting and Action Planning (11.7). Coach 6 asked both Clarifying Questions (CQ, 4.4) and Probing Questions (PQ, 1.2) at lower rates per 30 minutes than the other coaches, on average (CQ M = 13.9, SD = 6.4; PQ M = 4.3, SD = 2.7), but used them strategically when she did, during Reflection and Feedback, Goal Setting and Action Planning, and Coaching Process.

Coach and Teacher Dyad 7

Coach 7 was a 31-year-old white female, with a master’s degree in reading and reading disabilities. She had 66 months of experience with children birth to 5. Prior to working as a coach in the larger study, she had provided consultation and coaching and received training on principles of adult learning. She coached two teachers as part of the larger study. Coach 7 was paired with Teacher 7, a 54-year-old white female who had a bachelor’s degree in elementary education. Teacher 7 had 68 months of experience in the classroom with children birth to age 5 at the beginning of the study.

Coach 7’s implementation fidelity of the debrief protocol was 90.4% (SD = 6.6) across both teachers in the larger study. Coach 7’s implementation fidelity of the coaching debrief protocol with Teacher 7 was 90.4% (SD = 8.0), nearly identical to her average fidelity across both teachers with whom she was assigned to work. Dyad 7’s coaching debrief meetings lasted 65 minutes on average (range = 35-92), longer than any other dyad and inconsistent with the TfT coaching protocol. From session 1 to session 15, Dyad 7 developed 4 action plans, consistent with the other dyads, on average (M = 4.3). The action plans addressed 100% of the 14 embedded instruction practices, more than the other dyads, on average (M = 91.8%).

On average, Coach 7 was the primary initiator of the conversation focus (M = 63.1%). However, this dyad had the highest rate of teacher-initiated conversation foci compared to the other dyads. Teacher 7 showed a large increase in her percentage of conversation focus initiations from occasion 1 (25.9%) to occasion 3 (57.1%); however, teacher initiations decreased to 10% at occasion 2 (Table 4-13). As shown in Table 4-8, this dyad, similar to the other dyads, spent the majority of the coaching debrief session in Reflection and Feedback (RF, 43.8%) and Goal Setting and Action Planning (GS, 39.5%). These percentages were slightly higher, on average, than the other coaches (RF M = 41.7, SD = 6.8; GS M = 32.8, SD = 6.8). Problem-Solving occurred at a higher than average percentage (8.1 versus M = 5.6, SD = 4.1), but the remaining conversation foci were generally lower than for the other coaches. Coach 7 was the only coach to engage in Personal/Pleasantries and Problem-Solving across all three observed coaching debrief meetings. Summarizing was the only conversation focus used inconsistently across the three occasions; it occurred only at occasion 1.

Coach 7 had a lower than average rate per 30 minutes of verbal behavior for General Praise and Agreement (6.5 versus M = 9.9, SD = 4.0) and Clarifying Questions (8.2 versus M = 13.9, SD = 6.4). All other verbal behaviors were used at an above average rate. Coach 7 was more likely to use General Praise and Agreement than Supportive Verbal Feedback, but used a slightly lower than average rate of General Praise and Agreement and a slightly higher rate of Supportive Verbal Feedback when compared to the other coaches. Table 4-24 illustrates Coach 7’s verbal behavior relative to the conversation focus. Coach 7 provided her highest rate of General Praise and Agreement, 16.7 verbal behaviors per 30 minutes, during Personal/Pleasantries, but also provided General Praise and Agreement during Reflection and Feedback (9.3), Problem-Solving (3.9), Goal Setting and Action Planning (4.4), and Coaching Process (2.8). Coach 7 used Supportive Verbal Feedback at her highest rate during Summarizing, a coach-talk-only code (7.7), but also used Supportive Verbal Feedback during Personal/Pleasantries (8.3) and Reflection and Feedback (6.8). Constructive Verbal Feedback occurred at the highest rate during Reflection and Feedback at 10.1 verbal behaviors per 30 minutes, but also occurred during Coaching Process (2.8) and Goal Setting and Action Planning (1.2).

Coach 7 used Instructional Statements (IS), Probing Questions (PQ), and Demonstrations (DM) at the highest rates of all the coaches, as shown in Table 4-15. Instructional Statements and Demonstrations occurred at their highest rates per 30 minutes during Problem-Solving (IS = 31.2, DM = 11.7), but also occurred during Reflection and Feedback (IS = 11.1, DM = 6.5) and Goal Setting and Action Planning (IS = 21.5, DM = 3.6). Probing Questions were used at their highest rate per 30 minutes during Goal Setting and Action Planning (13.5), but they were also used at a high rate during Personal/Pleasantries (8.3), a code that is not related to the practice-focus.

Table 4-1. Coach demographics.
Coach   Age   Highest Degree   Months of Experience Ages 0-5a   Holds a Current Teaching Certificate   Prior Consultation or Coaching Experience   Training on Adult Learning Prior to Study   Experience on a Research Project
1       28    Masters           18    Yes   Yes   Yes   Yes
2       31    Masters            9    Yes   No    Yes   Yes
3       28    Masters           20    Yes   Yes   No    Yes
4       59    Masters            0    No    Yes   Yes   Yes
5       65    Bachelors        300    Yes   No    Yes   No
6       63    PhD              174    No    Yes   No    No
7       31    Masters           66    Yes   Yes   Yes   Yes
a Lead Teacher, Assistant Teacher, Therapist, or Intern

Table 4-2. Teacher demographics. Months of Hold Degree in Degree in Early Teaching mentorship or Teacher Age Highest Degree Special Childhood Experience leadership role Education Children 0-5a in school 1 26 Bachelors Yes No 29 No 2 23 Masters No Yes 20 No 3 27 Masters Yes Yes 72 No 4 40 Bachelors Yes No 1 No 5 27 Bachelors Yes No 73 Yes 6 47 Masters Yes No 42 No 7 54 Bachelors No No 68 Yes a Lead Teacher, Assistant Teacher, or Intern


Table 4-3. Comparison of implementation of select coaching protocol variables for all teachers coached by each coach and for each teacher coached in the present study Average Percent of Average Average Percent of Impleme Impleme Number Embedded Log Log Session Session Number Embedded ntation ntation of Action Instruction Fidelity Fidelity Duration Duration of Action Instruction Coach Fidelity Fidelity Plans All Practices All Each All Each Plans Practices All Each Teachers Addressed Teachers Teacher Teachers Teacher Each Addressed Teachers Teacher M Each M (SD) M (SD) M M Teacher All M (SD) M (SD) (range) Teacher (range) (range) Teachers M(SD) M (SD)a 1 92.2 95.0 91.5 90.8 38 37 3.3 5 67 64.3 (4.2) (1.9) (4.9) (3.9) (22-56) (22-50) (3-4) 2 96.3 96.3 91.3 91.3 33 33 4 4 100 100.0 (3.3) (3.3) (4.4) (4.4) (25-50) (25-50) 3 93.0 94.0 87.7 86.2 35 34 4.5 5 93 85.7 (4.6) (4.2) (5.6) (3.9) (13-75) (13-71) (4-5) 4 93.1 92.0 92.6 91.4 47 46 5 5 100 100 (4.0) (4.0) (2.1) (0.8) (27-65) (29-60) 5 88.1 88.8 88.2 88.2 40 36 4.5 4 95 100 (8.1) (3.7) (4.8) (3.3) (24-75) (26-45) (4-5) 6 89.8 87.9 88.8 88.0 54 50 3 3 86 92.9 (5.1) (4.7) (4.5) (4.6) (40-84) (40-60) 7 90.4 90.4 90.2 92.0 64 65 4 4 100 100 (6.6) (8.0) (5.4) (4.7) (32-92) (35-92) Mean 91.4 91.6 90.0 89.6 54 43 3.9 4.3 88.8 91.8 (SD) (5.8) (5.2) (4.7) (4.2) (20-92) (11.6) (3-5) (0.8) (14.5) (13.3) a Percentages are based on 14 key embedded instruction practices.


Table 4-4. Duration of conversation focus with initiation modifier occurrence agreement by code. Before Booster Session After Booster Session Duration of Overall Average Conversation Focus Average Average for for For Videos Codes 1 2 3 4 5 6 7 8 9 10 Videos Videos 1-10 1-4 5-10

Personal/ Pleasantries 100 50.0 66.7 100 79.2 100 100 -- 100 -- 100 100 89.6 Coach 100 50.0 100 -- 83.3 100 100 -- 100 -- 100 100 92.9 Teacher 100 -- 0.0 100 66.7 ------100 100 75.0

Coaching Process 50.0 66.7 66.7 57.1 60.1 20.0 100 100 100 50.0 100 78.3 71.0 Coach 50.0 66.7 66.7 33.3 54.2 20.0 100 100 100 50.0 100 78.3 68.7 Teacher ------40.0 40.0 -- 100 100 -- -- 100 100 85

Goal Setting and Action Planning 71.4 100 50.0 90.0 77.9 0.0 100 100 28.6 100 100 71.4 74.0 Coach 66.7 100 50.0 100 79.2 0.0 100 100 33.3 100 100 72.2 75.0 Teacher 100 100 -- 80.0 93.3 -- 100 0.0 0.0 -- 100 66.7 80.0

Problem Solving 100 -- -- 100 100 -- 100 -- 0.0 100 -- 100 100 Coach ------100 -- -- 100 -- 100 100 Teacher 100 -- -- 100 100.0 ------0.0 -- -- 0.0 66.7

Reflection and Feedback 90.9 85.7 50.0 76.9 75.9 80.0 100 100 50.0 72.7 100 83.8 80.6 Coach 71.4 66.7 60.0 62.5 65.1 50.0 100 85.7 60.0 62.5 100 76.4 71.9 Teacher 80.0 60.0 0.0 100 60.0 50.0 100 50.0 0.0 16.7 100 52.8 55.7

Summarizing 100 60.0 50.0 -- 70.0 75.0 100 100 100 60.0 100 89.2 82.8


Table 4-5. Verbal behavior occurrence agreement by code. Before Booster Session After Booster Session Verbal Behavior Average Average Overall Codes for for Average 1 2 3 4 5 6 7 8 9 10 Videos Videos For Videos 1-4 5-10 1-10

Neutral Statement 79.2 78.7 54.5 71.6 71.0 65.2 88.0 91.0 86.3 84.3 86.6 83.6 78.5

General Praise & 65.7 70.6 0.0 54.5 47.7 55.6 90.9 88.9 90.0 72.0 66.7 77.3 65.5 Agreement

Supportive Verbal 81.8 100 33.3 33.3 62.1 100 100 100 100 60.0 100 93.3 80.8 Feedback

Demonstration 85.7 -- 0.0 63.3 49.7 -- -- 85.7 100 0.0 100 71.4 62.1

Constructive Verbal 71.4 -- 20.0 41.7 44.4 100 100 50.0 100 38.5 0.0 64.7 58.0 Feedback

Instructional 80.0 100 42.1 47.9 67.5 0.0 100 88.2 86.7 71.4 100 74.4 71.6 Statements

Clarifying 75.0 81.8 50.0 55.9 65.7 82.6 90.0 78.1 84.6 73.7 94.7 84.0 76.6 Questions

Probing Questions 83.3 83.3 50.0 75.8 73.1 100 50.0 85.7 81.8 75.0 100 82.1 78.5


Table 4-6. Percent occurrence agreement and Cohen’s kappa by coaching debrief meeting video.
Coaching Debrief Meeting    Overall Percent Occurrence Agreement    Kappa

1 86.7 0.83

2 84.5 0.78

3 66.0 0.56

4 79.2 0.73

5 77.3 0.71

6 93.3 0.90

7 93.3 0.91

8 90.0 0.86

9 84.2 0.80

10 94.2 0.93
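
For readers less familiar with these reliability indices, the sketch below illustrates, under assumed inputs, how percent occurrence agreement and Cohen’s kappa can be computed from two coders’ interval-by-interval codes. It is written in Python for illustration only; the code labels and data are hypothetical, and the agreement procedures applied to the CPOT-RVI timed-event data may differ in their details.

    from collections import Counter

    # Hypothetical interval-by-interval codes from a primary and a secondary coder.
    # The two-letter labels stand in for CPOT-RVI categories and are illustrative only.
    primary   = ["RF", "RF", "GS", "CP", "RF", "SM", "GS", "RF", "PS", "GS"]
    secondary = ["RF", "GS", "GS", "CP", "RF", "SM", "GS", "RF", "RF", "GS"]

    n = len(primary)

    # Observed agreement: proportion of intervals the two coders coded identically.
    p_observed = sum(a == b for a, b in zip(primary, secondary)) / n

    # Chance agreement: sum over codes of the product of each coder's marginal proportions.
    counts_primary = Counter(primary)
    counts_secondary = Counter(secondary)
    p_chance = sum(
        (counts_primary[code] / n) * (counts_secondary[code] / n)
        for code in set(primary) | set(secondary)
    )

    # Cohen's kappa corrects observed agreement for agreement expected by chance.
    kappa = (p_observed - p_chance) / (1 - p_chance)

    print(f"Percent agreement: {100 * p_observed:.1f}")
    print(f"Cohen's kappa: {kappa:.2f}")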

Table 4-7. Percent of time in each conversation focus.
Conversation Focus                Occasion 1   Occasion 2   Occasion 3   Overall
Personal/Pleasantries                    1.4          1.2          1.0       1.2
Coaching Process                        11.2         10.9          8.8      10.4
Goal Setting & Action Planning          39.2         23.2         36.6      33.6
Problem Solving                          4.3          4.2          9.7       5.9
Reflection and Feedback                 36.3         54.8         39.0      42.7
Summarizing                              7.7          5.6          4.8       6.2

Table 4-8. Percent of time in each conversation focus by dyad.
Dyad     PP     CP     GS     PS     RF     SM
1        1.8    6.5   31.6    3.3   40.0   16.8
2        0.6   22.8   35.7    9.9   29.0    2.1
3        0.3    6.1   39.1   10.5   40.9    3.1
4        0.8   14.3   20.6    1.6   48.9   13.8
5        1.6    9.4   35.4    0.0   40.1    4.9
6        1.2   10.6   27.9    6.1   49.0    5.1
7        1.9    5.7   39.5    8.1   43.8    1.0
Mean     1.2   10.8   32.8    5.6   41.7    6.7
(SD)    (0.6)  (6.0)  (6.8)  (4.1)  (6.8)  (6.1)
Note. PP=Personal/Pleasantries. CP=Coaching Process. GS=Goal Setting and Action Planning. PS=Problem Solving. RF=Reflection and Feedback. SM=Summarizing.

Table 4-9. Percent of time in each conversation focus for each dyad across occasions.

Dyad PP CP GS 1 2 3 1 2 3 1 2 3 1 0.0 0.0 3.2 2.0 6.5 12.9 44.9 6.5 35.5 2 0.0 0.0 0.0 29.7 26.9 14.6 40.5 11.5 45.8 3 0.0 0.0 0.0 7.0 6.7 3.7 59.2 20.0 11.1 4 0.0 2.9 0.0 17.0 14.3 10.3 32.1 8.6 13.8 5 3.6 2.2 0.0 10.7 8.7 11.8 28.6 47.8 35.3 6 1.7 2.1 0.0 10.2 14.9 7.1 30.5 12.8 40.5 7 3.3 1.7 1.4 8.2 5.2 5.6 31.1 37.9 47.2 Mean 1.2 1.3 0.7 12.1 11.9 9.4 38.1 20.7 32.7 (SD) (1.6) (1.2) (1.2) (9.0) (7.7) (4.0) (11.0) (16.0) (14.6) Note. PP=Personal/Pleasantries. CP=Coaching Process. GS=Goal Setting and Action Planning. PS=Problem Solving. RF=Reflection and Feedback. SM=Summarizing.


Table 4-9 (Continued.) PS RF SM Dyad 1 2 3 1 2 3 1 2 3 1 18.2 0.0 0.0 30.6 58.1 38.7 14.3 29.0 9.7 2 0.0 0.0 22.9 24.3 61.5 14.6 5.4 0.0 2.1 3 12.7 13.3 0.0 19.7 56.7 77.8 1.4 3.3 7.4 4 0.0 0.0 6.9 34.0 65.7 55.2 17.0 8.6 13.8 5 0.0 0.0 0.0 46.4 37.0 50.0 10.7 4.3 2.9 6 0.0 8.5 11.9 49.2 61.7 35.7 8.5 0.0 4.8 7 3.3 5.2 13.9 50.8 50.0 31.9 3.3 0.0 0.0 Mean 4.9 3.9 7.9 36.4 55.8 43.4 8.6 6.5 5.8 (SD) (7.5) (5.4) (8.8) (12.5) (9.7) (20.0) (5.7) (10.4) (4.8) Note. PP=Personal/Pleasantries. CP=Coaching Process. GS=Goal Setting and Action Planning. PS=Problem Solving. RF=Reflection and Feedback. SM=Summarizing.

Table 4-10. Percent initiations by coaches and teachers by conversation foci.
Conversation Focus                 Coach   Teacher
Overall                             77.0      23.0
Personal/Pleasantries               80.0      20.0
Coaching Process                    83.0      17.0
Goal Setting & Action Planning      80.0      20.0
Problem Solving                     50.0      50.0
Reflection and Feedback             65.9      34.1
Summarizing                        100.0       0.0

Table 4-11. Percent of coach and teacher initiations by conversation focus. Conversation Focus Dyad PP CP GS PS RF SM Overall Coach 80.0 80.0 90.9 100 61.9 100 80.4 1 Teacher 20.0 20.0 9.1 0.0 38.1 0.0 19.6

Coach 75.0 90.9 92.3 66.7 86.7 100 87.8 2 Teacher 25.0 9.1 7.7 33.3 13.3 0.0 12.2

Coach 100 100 70.0 66.7 76.9 100 82.1 3 Teacher 0.0 0.0 30.0 33.3 23.1 0.0 17.9

Coach 50.0 87.5 100 100 65.4 100 81.4 4 Teacher 50.0 12.5 0.0 0.0 34.6 0.0 18.6

Coach 100 100 100 0.0 63.2 100 83.3 5 Teacher 0.0 0.0 0.0 0.0 36.8 0.0 16.7

Coach 100 66.7 55.6 50.0 50.0 100 67.3 6 Teacher 0.0 33.3 44.4 50.0 50.0 0.0 32.7

Coach 57.1 62.5 64.7 0.0 66.7 100 63.1 7 Teacher 42.9 37.5 35.3 100 33.3 0.0 36.9

Mean Coach 80.3 83.9 81.9 54.8 67.2 100.0 77.9 Teacher 19.7 16.1 18.1 31.0 32.8 0.0 22.1 (SD) (21.0) (15.0) (18.1) (36.6) (11.7) (0.0) (9.1) Note. PP=Personal/Pleasantries. CP=Coaching Process. GS=Goal Setting and Action Planning. PS=Problem Solving. RF=Reflection and Feedback. SM=Summarizing.


Table 4-12. Percent of coach and teacher initiations across occasions. Conversation Focus Occasion 1 Occasion 2 Occasion 3 Overall Coach 83.2 78.4 70.4 Teacher 16.8 21.6 29.6

Personal/Pleasantries Coach 81.8 90.9 62.5 Teacher 18.2 9.1 37.5

Coaching Process Coach 93.8 85.7 68.8 Teacher 6.3 14.3 31.3

Goal Setting & Action Coach 79.2 88.2 76.5 Planning Teacher 20.8 11.8 23.5

Problem Solving Coach 75.0 33.3 42.9 Teacher 25.0 66.7 57.1

Reflection and Coach 74.4 61.9 62.3 Feedback Teacher 25.6 38.1 37.7

Summarizing Coach 100.0 100.0 100.0 Teacher 0.0 0.0 0.0


Table 4-13. Percent of coach and teacher initiations across time. Dyad Occasion 1 Occasion 2 Occasion 3 Overall 1 Coach 95.0 75.0 70.0 80.4 Teacher 5.0 25.0 30.0 19.6

2 Coach 87.5 92.9 85.2 87.8 Teacher 12.5 7.1 14.8 12.2

3 Coach 82.4 80.0 83.3 82.1 Teacher 17.6 20.0 16.7 17.9

4 Coach 84.2 80.0 80.0 81.4 Teacher 15.8 20.0 20.0 18.6

5 Coach 100.0 72.2 83.3 83.3 Teacher 0.0 27.8 16.7 16.7

6 Coach 68.8 66.7 66.7 67.3 Teacher 31.3 33.3 33.3 32.7

7 Coach 74.1 90.0 42.9 63.1 Teacher 25.9 10.0 57.1 36.9

Mean Coach 84.6 79.5 73.1 77.9 Teacher 15.4 20.5 26.9 22.1 (SD) (11.0) (9.4) (15.1) (9.1)

Table 4-14. Rate of coach verbal behavior per 30 minutes.
Coach Verbal Behavior            Occasion 1   Occasion 2   Occasion 3   Overall
Neutral Statement                      55.0         66.4         64.1      61.2
General Praise and Agreement            9.7          9.9         10.4      10.0
Supportive Verbal Feedback              3.7          1.7          2.9       2.8
Constructive Verbal Feedback            2.5          3.7          3.3       3.1
Instructional Statements                8.3          7.9         12.5       9.5
Clarifying Questions                   15.3         11.0         13.6      13.5
Probing Questions                       5.1          3.2          5.1       4.5
Demonstration                           1.9          1.2          2.9       2.0

Table 4-15. Coach differences in rate of coach verbal behavior per 30 minutes.
Coach    NS     PA    SF    CF    IS    CQ    PQ    DM   Overall

1 50.6 6.2 4.5 1.3 7.0 12.6 8.0 2.9 93.2

2 74.4 8.8 3.5 5.3 15.5 21.9 2.7 0.3 132.9

3 49.2 10.5 1.9 1.2 2.3 21.5 2.3 0.2 88.8

4 75.9 18.1 2.4 4.7 9.1 14.5 4.9 2.9 133.0

5 54.7 10.0 1.8 2.3 6.2 14.1 3.1 0.5 93.4

6 48.5 9.4 1.2 1.0 6.2 4.4 1.2 0.0 72.0

7       64.4   6.5   3.9   5.0  15.9   8.2   7.7   5.1   116.6
Mean    59.7   9.9   2.7   3.0   8.9  13.9   4.3   1.7   104.3
(SD)   (11.9) (4.0) (1.2) (1.9) (5.1) (6.4) (2.7) (2.0)  (23.5)

Note. NS=Neutral Statement. PA=General Praise and Agreement. SF=Supportive Verbal Feedback. CF=Constructive Verbal Feedback. IS=Instructional Statements. CQ=Clarifying Questions. PQ=Probing Questions. DM=Demonstration.

Table 4-16. Rate of coach verbal behavior per 30 minutes across occasions.
Coach   Occasion 1   Occasion 2   Occasion 3   Overall
1             95.4         74.0        107.8      93.2
2             91.2        133.0        197.8     132.9
3             73.3        107.0        110.0      88.8
4            131.7        132.2        136.5     133.0
5            117.9         88.8         81.3      93.4
6             63.1         72.9         83.0      72.0
7            121.0        106.8        120.8     116.6
Mean          99.1        102.1        119.6     104.3
(SD)         (25.6)      (24.92)      (39.66)    (23.5)

Table 4-17. Rate of coach verbal behavior within each conversation focus per 30 minutes.
Coach Verbal Behavior             PP      CP     GS     PS     RF     SM
Neutral Statement              160.9    88.7   54.4   54.4   59.3   51.4
General Praise and Agreement    10.9     2.2    6.3    5.0   15.2   11.8
Supportive Verbal Feedback       2.7     0.6    0.7    0.6    3.9   12.9
Constructive Verbal Feedback     0.0     0.3    1.2    2.8    5.6    2.7
Instructional Statements         0.0     2.5   14.1   18.3    8.0    0.0
Clarifying Questions            16.4    30.0   14.3    7.2   11.2    2.7
Probing Questions                5.5     3.5    6.4    3.9    4.0    0.0
Demonstration                    0.0     0.1    0.3    0.6    0.4    0.2
Note. PP=Personal/Pleasantries. CP=Coaching Process. GS=Goal Setting and Action Planning. PS=Problem Solving. RF=Reflection and Feedback. SM=Summarizing.

Table 4-18. Coach 1 rate and type of verbal behavior per 30 min by conversation focus.
Coach Verbal Behavior             RF     GS     SM     CP     PS     PP
General Praise and Agreement    11.4    3.4    3.2    0.0    0.0    0.0
Supportive Verbal Feedback       2.7    3.4   14.5    0.0    0.0    0.0
Constructive Verbal Feedback     1.3    0.0    1.6    0.0   16.1    0.0
Instructional Statements         6.7   12.8    0.0    4.2    0.0    0.0
Clarifying Questions            12.12  15.33   1.61  24.94   0.00  60.14
Probing Questions                8.75  11.92   0.00   4.16   8.06  15.04
Demonstration                    2.0    5.1    1.6    0.0    8.1    0.0
Neutral Statement               46.5   57.9   40.2   66.5   48.3   75.2
Note. RF=Reflection and Feedback. GS=Goal Setting and Action Planning. SM=Summarizing. CP=Coaching Process. PS=Problem Solving. PP=Personal/Pleasantries.

Table 4-19. Coach 2 rate and type of verbal behavior per 30 min by conversation focus.
Coach Verbal Behavior             RF     GS     SM     CP     PS     PP
General Praise and Agreement    16.8    7.6   26.1    2.4    2.7    0.0
Supportive Verbal Feedback       8.4    0.8   26.1    0.0    2.7    0.0
Constructive Verbal Feedback     9.3    5.3   26.1    0.0    2.7    0.0
Instructional Statements        16.8   20.5    0.0    0.0   35.6    0.0
Clarifying Questions            18.68  23.49  13.06  30.90  10.96   0.00
Probing Questions                4.67   3.79   0.00   0.00   0.00   0.00
Demonstration                    0.9    0.0    0.0    0.0    0.0    0.0
Neutral Statement               76.6   64.4   78.4   91.5   60.3  283.5
Note. RF=Reflection and Feedback. GS=Goal Setting and Action Planning. SM=Summarizing. CP=Coaching Process. PS=Problem Solving. PP=Personal/Pleasantries.

Table 4-20. Coach 3 rate and type of verbal behavior per 30 min by conversation focus.
Coach Verbal Behavior             RF     GS     SM     CP     PS     PP
General Praise and Agreement    16.7    6.0   22.8    0.0    6.8    0.0
Supportive Verbal Feedback       2.9    0.0   22.8    0.0    0.0    0.0
Constructive Verbal Feedback     2.9    0.0    0.0    0.0    0.0    0.0
Instructional Statements         1.7    3.6    0.0    0.0    2.3    0.0
Clarifying Questions            22.49  21.70   0.00  38.65  15.77   0.00
Probing Questions                2.31   2.41   0.00   0.00   4.51   0.00
Demonstration                    0.0    0.6    0.0    0.0    0.0    0.0
Neutral Statement               62.3   31.3   53.2   73.4   49.6  210.8
Note. RF=Reflection and Feedback. GS=Goal Setting and Action Planning. SM=Summarizing. CP=Coaching Process. PS=Problem Solving. PP=Personal/Pleasantries.

Table 4-21. Coach 4 rate and type of verbal behavior per 30 min by conversation focus.
Coach Verbal Behavior             RF     GS     SM     CP     PS     PP
General Praise and Agreement    28.1    9.9   18.5    3.6    0.0    0.0
Supportive Verbal Feedback       3.6    0.0    5.5    0.0    0.0    0.0
Constructive Verbal Feedback     9.9    0.0    0.0    0.0    0.0    0.0
Instructional Statements        10.9   16.1    0.0    3.6   15.5    0.0
Clarifying Questions             9.37  18.53   5.54  39.26  15.53   0.00
Probing Questions                5.20   1.24   0.00  16.06   0.00   0.00
Demonstration                    5.7    1.2    0.0    0.0    0.0    0.0
Neutral Statement               78.0   75.4   62.8   98.1   62.1  190.8
Note. RF=Reflection and Feedback. GS=Goal Setting and Action Planning. SM=Summarizing. CP=Coaching Process. PS=Problem Solving. PP=Personal/Pleasantries.

Table 4-22. Coach 5 rate and type of verbal behavior per 30 min by conversation focus.
Coach Verbal Behavior             RF     GS     SM     CP     PS     PP
General Praise and Agreement    17.3    5.8    5.3    2.7   0.00   32.3
Supportive Verbal Feedback       1.3    0.0   15.8    5.5   0.00    0.0
Constructive Verbal Feedback     3.2    1.5   10.5    0.0   0.00    0.0
Instructional Statements         5.1    9.4    0.0    8.2   0.00    0.0
Clarifying Questions            16.00  13.07   0.00  27.33  0.00   32.33
Probing Questions                4.48   3.63   0.00   0.00  0.00    0.00
Demonstration                    0.0    0.0    5.3    2.7   0.00    0.0
Neutral Statement               64.0   48.7   36.9   65.6   0.00  242.5
Note. RF=Reflection and Feedback. GS=Goal Setting and Action Planning. SM=Summarizing. CP=Coaching Process. PS=Problem Solving. PP=Personal/Pleasantries.

Table 4-23. Coach 6 rate and type of verbal behavior per 30 min by conversation focus.
Coach Verbal Behavior             RF     GS     SM     CP     PS     PP
General Praise and Agreement    10.8    9.5   15.9    1.9   10.1    0.0
Supportive Verbal Feedback       2.1    0.0    4.0    0.0    0.0    0.0
Constructive Verbal Feedback     1.2    0.0    0.0    0.0    6.7    0.0
Instructional Statements         5.4   11.7    0.0    0.0    6.7    0.0
Clarifying Questions             3.33   3.65   0.00  17.29   0.00   0.00
Probing Questions                1.25   2.19   0.00   0.00   0.00   0.00
Demonstration                    0.0    0.0    0.0    0.0    0.0    0.0
Neutral Statement               43.7   40.9   51.5   84.5   53.7  156.5
Note. RF=Reflection and Feedback. GS=Goal Setting and Action Planning. SM=Summarizing. CP=Coaching Process. PS=Problem Solving. PP=Personal/Pleasantries.

Table 4-24. Coach 7 rate and type of verbal behavior per 30 min by conversation focus.
Coach Verbal Behavior             RF     GS     SM     CP     PS     PP
General Praise and Agreement     9.3    4.4    0.0    2.8    3.9   16.7
Supportive Verbal Feedback       6.8    0.8   46.4    0.0    0.0    8.3
Constructive Verbal Feedback    10.1    1.2    0.0    2.8    0.0    0.0
Instructional Statements        11.1   21.5    0.0    5.5   31.2    0.0
Clarifying Questions             6.1    9.2    0.00  33.1    2.0    0.00
Probing Questions                3.6   13.5    0.00   2.8    7.8    8.3
Demonstration                    6.5    3.6    0.0    0.0   11.7    0.0
Neutral Statement               55.7   66.9   61.9  126.7   54.7  125.0
Note. RF=Reflection and Feedback. GS=Goal Setting and Action Planning. SM=Summarizing. CP=Coaching Process. PS=Problem Solving. PP=Personal/Pleasantries.

Figure 4-1. Percent of total time in each conversation focus: Reflection and Feedback 43%, Goal Setting and Action Planning 34%, Coaching Process 10%, Problem Solving 6%, Summarizing 6%, Personal/Pleasantries 1%.

Figure 4-2. Percent of time within each conversation focus by coach. The percentages displayed for each coach correspond to those reported in Table 4-8.

Figure 4-3. Percent of total time in each conversation focus across time (occasions 1-3). The percentages displayed correspond to those reported in Table 4-7.

Note. PP=Personal/Pleasantries. CP=Coaching Process. GS=Goal Setting and Action Planning. PS=Problem Solving. RF=Reflection and Feedback. SM=Summarizing.

Figure 4-4. Rate of coach verbal behavior per 30 minutes across time (occasions 1-3). The rates displayed correspond to those reported in Table 4-14.

Note. NS=Neutral Statement. PA=General Praise and Agreement. SF=Supportive Verbal Feedback. CF=Constructive Verbal Feedback. IS=Instructional Statements. CQ=Clarifying Questions. PQ=Probing Questions. DM=Demonstration.

Figure 4-5. Rate of coach verbal behavior per 30 minutes within each conversation focus. The rates displayed correspond to those reported in Table 4-17. Neutral Statements were excluded from this figure to increase discrimination of verbal behaviors that occurred at much lower rates.

Note. PA=General Praise and Agreement. SF=Supportive Verbal Feedback. CF=Constructive Verbal Feedback. IS=Instructional Statements. CQ=Clarifying Questions. PQ=Probing Questions. DM=Demonstration.

CHAPTER 5 DISCUSSION

The purpose of the present study was to explore the process dimension of practice-based coaching (PBC, Snyder, Hemmeter, & Fox, 2015) by measuring observable coach-teacher interactions within a coaching partnership. Data were obtained from the first-year data set for a larger efficacy trial examining Tools for Teachers (TfT), a professional development (PD) intervention designed to support classroom teachers’ use of embedded instruction with preschool children with disabilities (Snyder, Algina, et al., 2015). Coaching debrief meetings recorded for the TfT efficacy trial were used in the present study to quantify the percent of time spent in each conversation focus, the number of coach and teacher initiations, and the rate of coach verbal behaviors. The sample included 21 video-recorded coaching debrief meetings representing seven unique dyads across three occasions. These data were coded using the CPOT-RVI (Shannon & Snyder, 2016), a continuous timed-event observational coding system. Descriptive statistics were used to analyze the data.
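
To make the derivation of these descriptive metrics concrete, the sketch below shows one way the percent of time in each conversation focus, the percent of coach- versus teacher-initiated foci, and the rate of coach verbal behavior per 30 minutes could be computed from timed-event records. The sketch is written in Python for illustration only; the record structure, field names, and values are hypothetical and do not reproduce the CPOT-RVI export format or the study’s data.

    from collections import Counter, defaultdict

    # Hypothetical timed-event records for one coaching debrief meeting.
    # Field names and values are illustrative only, not the CPOT-RVI export format.
    focus_segments = [
        # (conversation focus, initiator, start_sec, end_sec)
        ("Personal/Pleasantries", "coach", 0, 45),
        ("Reflection and Feedback", "coach", 45, 900),
        ("Goal Setting and Action Planning", "teacher", 900, 1680),
        ("Summarizing", "coach", 1680, 1800),
    ]
    verbal_events = [
        # (coach verbal behavior, time_sec)
        ("Supportive Verbal Feedback", 120),
        ("Clarifying Questions", 300),
        ("General Praise and Agreement", 310),
        ("Constructive Verbal Feedback", 1000),
    ]

    total_sec = sum(end - start for _, _, start, end in focus_segments)

    # Percent of meeting time spent in each conversation focus.
    seconds_per_focus = defaultdict(float)
    for focus, _, start, end in focus_segments:
        seconds_per_focus[focus] += end - start
    percent_time = {f: 100 * s / total_sec for f, s in seconds_per_focus.items()}

    # Percent of conversation foci initiated by the coach versus the teacher.
    initiation_counts = Counter(initiator for _, initiator, _, _ in focus_segments)
    percent_initiated = {who: 100 * n / len(focus_segments) for who, n in initiation_counts.items()}

    # Rate of each coach verbal behavior per 30 minutes of meeting time.
    behavior_counts = Counter(behavior for behavior, _ in verbal_events)
    rate_per_30_min = {b: n / (total_sec / 60) * 30 for b, n in behavior_counts.items()}

    print(percent_time, percent_initiated, rate_per_30_min)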

In this chapter, findings from the present study are interpreted, implications for coach-based professional development are discussed in the context of PBC and evidence-based PD, and limitations of the present study and recommendations for future research and practice are identified.

Interpretation of Findings

Fidelity of Implementation

PBC is composed of three cyclical components: (a) strengths and needs assessment to inform shared goals and action planning, (b) focused observation, and

(c) reflection and feedback. The components occur within the context of a collaborative

partnership and focus on a set of interrelated environmental, interactional, or teaching practices (Snyder et al., 2015; Figure 1-1). Coaches’ adherence to the coaching debrief meeting protocol in year one of the larger study (M = 91.4, SD = 5.8) and in the present study (M = 91.6, SD = 5.2) was high. The coaching debrief meeting protocol used in the efficacy study had required indicators designed to elicit five of the six conversation foci in the present study (i.e., Personal/Pleasantries, Coaching Process, Goal Setting and

Action Planning, Reflection and Feedback, and Summarizing). One of the conversation foci (i.e., Problem Solving) was not explicitly part of the required protocol indicators for the efficacy study, but was a recommended coaching strategy that coaches could use.

Overall, findings in the present study show coaches’ distribution of time in each conversation focus is aligned with the high fidelity of implementation scores demonstrating adherence to the protocol in the larger study. It is not surprising that the largest percent of conversation focus time was allocated to Reflection and Feedback (M

= 41.7%, SD = 6.8) and Goal Setting and Action Planning (M = 32.8%, SD = 6.8), two of the key components of the PBC framework.

Three of the coach verbal behaviors measured were required components of the

PBC framework and coaching protocol used in the larger TfT efficacy study: Supportive

Verbal Feedback, Constructive Verbal Feedback, and Clarifying Questions. Supportive

Verbal Feedback is addressed by three indicators in the coaching debrief meeting protocol and occurred on average at a rate of 2.7 times per 30 minutes (range across coaches = 1.2 – 4.5, Table 4-15). Based on the protocol, it is expected coaches would use Supportive Verbal Feedback consistently throughout the 15 coaching debrief meetings, and data in the present study confirm that Supportive Verbal Feedback was

provided by all coaches across the three sampled occasions. Constructive Verbal

Feedback is addressed by three indicators in the coaching debrief protocol and occurred 3.0 times per 30 minutes on average (range across coaches = 1.0 – 5.3, Table

4-15). Based on the protocol for the efficacy study, it was expected that coaches would abstain from Constructive Verbal Feedback in their first coaching debrief meeting and slowly increase their use of this strategy over time as the collaborative partnership developed. The data indicate coaches slightly increased their rate of Constructive

Verbal Feedback per 30 minutes from occasion 1 to occasions 2 and 3, at 2.5, 3.7, and 3.3 times per 30 minutes, respectively (Table 4-14). Clarifying Questions occurred, on average, 13.9 times per 30 minutes (range across coaches = 4.4 – 21.9, Table 4-15).

These questions occurred at the highest rate during the Coaching Process duration code (30.0 per 30 minutes, Table 4-17), when the coach and teacher confirmed roles and responsibilities for the coming week, the time of their next meeting, and how the teacher wanted the coach to provide support in the next focused observation. The use of all three required verbal behaviors by coaches in the present sample was aligned with the guidance provided through the PBC training, the coaching manual, ongoing coaching implementation support coaches received on weekly videoconference meetings, and the coaching protocol received by all coaches.

Previous studies have reported the importance of coach training, manuals, protocols, and ongoing fidelity feedback and support for ensuring coaching practices are implemented as intended in early childhood contexts (Neuman & Wright, 2010; Powell &

Diamond, 2013; Snyder, Hemmeter et al., 2015). For example, Powell and Diamond’s

(2013) analyses of transcribed coach-teacher debrief meetings found the coaching

protocol ensured teachers received an early literacy and language intervention with nearly identical content foci and no statistically significant differences in the frequency of coaches’ statements. Similarly, Neuman and Wright’s (2010) analysis of weekly self-report coaching logs from coaches involved in supporting teachers to implement early literacy practices indicated the combination of initial training and ongoing coaching meetings facilitated the coaches’ adherence to and overall fidelity of implementation of project-based coaching practices.

The present study provides further evidence confirming the importance of operationally defining the components of coaching (i.e., strengths and needs assessment, shared goals and action planning, focused observation, reflection and feedback) and the actions or behaviors of coaches related to these components (i.e., observable indicators on the coaching implementation protocol). Job aids for coaches (e.g., coach manuals, coaching protocols) plus ongoing supports for enhancing coaches’ fidelity of implementation of the coaching protocol throughout the coaching cycle are critical. In the larger study, coaches received support from the principal investigators, project coordinator, and lead coaches. These supports were provided at three levels of intensity: universal, targeted, and individualized. Universal support included (a) 4 hours of initial training on the content, (b) 8 hours of training on the structure and coaching process, and (c) the provision of a coach manual and protocols. Targeted supports included two 1-hour coaching calls each week, one across and one within implementation sites, to (a) refine coaches’ knowledge of the 14 key embedded instruction practices, (b) discuss coaching strategies and materials likely to facilitate teachers’ acquisition of these practices, and (c) provide targeted feedback

about common fidelity of implementation needs (e.g., providing constructive feedback, writing a quality action plan goal, using graphic feedback). All coaches received individual performance feedback on their fidelity of implementation of the coaching protocol on a minimum of three occasions. Individual and more frequent support from the project coordinator and lead coach was also provided as needed to address adherence, quality, and timing of coaches’ implementation of the project-developed protocol, individual content area needs, and the development of high-quality action plans. This tiered support structure was designed to (1) enhance coaches’ implementation fidelity of observable protocol indicators, and (2) increase the probability that the TfT coach-based professional development intervention designed to install new interactional and teaching practices related to embedded instruction would achieve the intended outcomes (Durlak & DuPre, 2008; Hulleman et al., 2013; Snyder et al., 2017).

Coaching Implementation Quality

Implementing the coaching protocol with fidelity ensured teachers received practice-based coaching as intended. Fidelity required adherence to (a) the content focus of the specified set of teaching practices (i.e., 14 embedded instruction practices);

(b) structure (i.e., weekly coaching for sessions 2-15, which included a focused observation and debrief); and (c) processes used to implement the components of PBC

(i.e., strengths and needs assessment, shared goals and action planning, focused observation, and reflection and feedback) and required coach actions associated with the components (e.g., providing supportive and constructive verbal feedback, facilitating reflection in the context of a coaching debrief meeting).

Although coach adherence to the PBC coaching protocol was high (Table 4-5), the conversation focus, initiation, and verbal behavior data collected in the present


study suggest there was variability in coach behavior. The variability observed indicates there were differences in the quality of PBC implementation among coaches, where quality was defined by the exemplars shared in training and the clarifications regarding coach behavior presented in the coach manual. At least two studies (Jayaraman et al.,

2015; Powell, Steed et al., 2010) to date have coded and quantified the frequency of coach verbal behaviors or their potential function within differing types of coach-teacher interactions or conversation foci. The data from the present study contribute to a small but significant body of research that is focused on identifying how dyadic differences influence the active ingredients of coaching and the intensity with which they are used to support teachers’ acquisition of new knowledge and skills. A better understanding of coach-based implementation variability may help to inform programmatic decisions about how to hire, train, and provide ongoing support to coaches. Decisions about these competency drivers of implementation science, when made in combination with programmatic decisions about technical and adaptive leadership and organizational policies and data-analytic systems, have the potential to help programs build more effective, efficient, and sustainable professional development plans aligned with teacher and child outcomes (Fixsen et al., 2005; Metz et al., 2013).

Conversation foci

Although all coaches in the present study implemented the protocol with high fidelity and consistently emphasized conversation foci associated with the key components of PBC across the coaching debrief meetings sampled, coach-teacher dyads exhibited noteworthy variability in the average percent of time spent in Reflection and Feedback (range across coaches = 29 - 49, Table 4-8), Goal Setting and Action

Planning (range across coaches = 20.6 - 39.5, Table 4-8), and to some extent in other


conversation foci (i.e., Coaching Process, Summarizing, Problem Solving).

Demographic and implementation information collected about the coaches and teachers in the present study sample provide some insight into how coach training and experience might have contributed to this variation. However, it is likely that preferred learning and interactional styles, individual learning histories, motivational variables, and evolving accommodations and adaptations between the coach and teacher as part of the transactional and collaborative partnership also influenced the variability found in the present study.

For example, demographic information indicates the amount of time spent in

Coaching Process and Summarizing conversations varied based on teachers’ or coaches’ classroom experience. Dyads 1, 2, and 4 had the least experience working with children birth to age five (range across coaches 1, 2, and 4 = 0 - 18 months; range across teachers 1, 2, and 4 = 1 - 29 months). These dyads also had higher proportions of time in one or both of these conversation foci. These findings suggest novice teachers may need more support than more experienced teachers to understand their role in the Coaching Process and what actions to take between coaching sessions. They may also benefit from more classroom examples that illustrate key concepts, like those captured by the Summarizing conversation focus, which facilitate the development of conceptual frameworks for how to enact teaching practices (NRC, 2000). Although beneficial for building teachers’ conceptual framework for what embedded instruction looks like in a preschool classroom, Summarizing is a coach-talk-only code. Lengthy streams of coach verbal behavior could overwhelm the teacher if the coach presents too much information in the absence of the opportunity to collaboratively reflect on

descriptions of what occurred during the focused classroom observation. Coaches with less experience in the birth-to-five context may need guidance in determining which examples from the focused observation are most salient for illustrating the key embedded instruction practices and when and how to share exemplars of these practices.

A second example of the coach’s influence on the conversational focus of the collaborative partnership was found in Problem-Solving. Six of the seven coaches engaged in Problem Solving. Coach 5 never facilitated Problem Solving conversations with her teacher, but did spend a large percentage (40.1%) of her coaching debrief meetings in Reflection and Feedback. Coach 5 had the greatest amount of experience (300 months) working with children birth to age 5 and some training on the principles of adult learning, but did not have coaching or consultative experience prior to coaching in the TfT efficacy study. When Teacher 5 raised instructional dilemmas during the coaching debrief meetings, Coach 5 responded by empathizing or providing solutions based on her experience rather than using the facilitation skills necessary to guide Teacher 5 through a collaborative Problem-Solving conversation (Peterson, 2013).

Personal/Pleasantries was observed in all dyads for less than 2% of the total time and typically occurred at the beginning and end of the coaching debrief meeting, which is well aligned with the protocol indicators and fidelity of PBC implementation. However, only Dyad 7 consistently engaged in this conversation focus at all three occasions (Table 4-9). This brief opportunity to check in at the beginning of the coaching debrief meeting and to thank the teacher for her time at the end of the meeting appeared to build rapport, commitment, and trust between the coach and teacher when it did occur. In addition, it provided an opportunity for the teacher to share personal events that may have influenced her implementation of the new teaching practices (e.g., job demands, family or medical concerns, other experiences or sources of information influencing her use of teaching practices). Given Personal/Pleasantries is one of the more salient aspects of a collaborative partnership, it is unfortunate this conversation focus was not consistently observed for a measurable duration in all dyads.

Verbal behavior

The rate of coach verbal behavior is an indication of the number of turns or exchanges that occurred within each conversation focus and the number of times coaches engaged in each of the eight verbal behaviors (i.e., Neutral Statements,

General Praise and Agreement, Supportive Verbal Feedback, Constructive Verbal

Feedback, Instructional Statements, Clarifying Questions, Probing Questions, and

Demonstration). Coaches’ average rate of verbal behavior per 30 minutes across all verbal behavior codes was 104.3 (SD = 23.5, range = 72.0 – 133.0, Table 4-15). Similar to the conversation foci, coaches consistently used the required PBC protocol verbal behaviors (Supportive Verbal Feedback, Constructive Verbal Feedback, Clarifying

Questions) across all three occasions observed and coded for the present study (Table

4-14); however, they varied in both the intensity (i.e., rate per 30 minutes, Table 4-15) and the function (i.e., continued interaction related to a specific conversation focus) of the verbal behaviors used (Tables 4-18 to 4-24).

General praise and agreement and supportive verbal feedback. Providing positive feedback about the teacher’s demonstration of behaviors associated with overall classroom quality and the use of embedded instruction teaching practices during the coaching debrief meeting is a required component of the PBC protocol. Within the


CPOT-RVI coding system, positive feedback was captured through the use of two verbal behavior codes: (1) General Praise and Agreement and (2) Supportive Verbal

Feedback. General Praise and Agreement acknowledged something the teacher said, did during the observation, or a product the teacher made; but it did not support the teacher to know the specific attributes of his or her actions that made them positive. In contrast, Supportive Verbal Feedback serves the same function as General Praise and

Agreement, but specifically describes what the teacher did and, in many cases, why continuing to use specific teaching practices would support fidelity of implementation of the embedded instruction interactional and teaching practices. Because of this distinction, Supportive Verbal Feedback is more likely than General Praise and

Agreement to support teachers to acquire knowledge and skills (e.g., Artman-Meeker &

Hemmeter, 2012; Barton & Wolery, 2007; Casey & McWilliam, 2011; Hemmeter,

Snyder, Kinder, & Artman, 2011; Kluger & DeNisi, 1996), especially when used more

Scheeler, Ruhl, & McAfee, 2004).

General Praise and Agreement occurred 9.9 times per 30 minutes on average

(range across coaches = 6.2 – 18.1, Table 4-15). This rate was fairly consistent across occasions, and more than three times the rate of Supportive Verbal Feedback at 2.7 times per 30 minutes (range across coaches = 1.2 – 4.5, Table 4-15). These differences are meaningful. For example, within a 30-minute coaching debrief meeting, Teacher 1 received, on average, five Supportive Verbal Feedback statements and six General

Praise and Agreement statements compared to Teacher 6 who only received one

Supportive Verbal Feedback statement and 10 General Praise and Agreement

statements. Both teachers received approximately 11 positive statements within 30 minutes; however, the quality of those statements and the probability that they would support the teacher’s acquisition and mastery of knowledge and skills were markedly different. Coaches who delivered higher rates of General Praise and Agreement tended to provide lower rates of

Supportive Verbal Feedback. These findings suggest coaches may need additional training and support around the distinctions between these two types of feedback. Both

Supportive Verbal Feedback and General Praise and Agreement are currently addressed in the PBC coach training and through ongoing fidelity feedback within the larger efficacy study. However, fidelity is a binary code recording adherence, that is, whether Supportive Verbal Feedback occurred or did not occur; it does not account for the quality, frequency, or distribution of verbal behavior within each coaching debrief meeting.

On average, and for each coach, Supportive Verbal Feedback occurred at the highest rate during the Summarizing conversation focus, a coach-talk-only code, where the coach summarizes events that occurred in the focused observation or during the debriefing meeting (Tables 4-18 to 4-24). In contrast, General Praise and Agreement occurred at a higher rate than Supportive Verbal Feedback during Reflection and

Feedback, on average, and for each coach. One explanation for the more prevalent use of General Praise and Agreement during Reflection and Feedback is that coaches who are less knowledgeable about the distinction between Supportive Verbal Feedback and

General Praise and Agreement may not use specific positive feedback linked to the practice-focus within an ongoing conversation. Coaches 1, 2, 4, and 7 had concurrent or prior experience working on research projects where PBC was one component of the


independent variable and the importance of using specific non-attributive Supportive

Verbal Feedback was emphasized. Coaches 1, 2, 4, and 7 also had the highest rate of

Supportive Verbal Feedback (Table 4-15) and tended to have better self-awareness of their use of particular coaching behaviors as indicated by the higher log fidelity scores

(i.e., what the coach self-reported versus what the observer saw in the video of the coaching debrief meeting, Table 4-3).

Constructive verbal feedback. Constructive Verbal Feedback serves a parallel role to Supportive Verbal Feedback within the PBC framework by raising teachers’ awareness and knowledge of embedded instruction teaching practices, supporting them to self-assess their current implementation of those practices and identifying ways to enhance current implementation or acquisition of new interactional or teaching practices

(Snyder, Hemmeter, & Fox, 2015). Several studies have demonstrated the importance of providing both supportive and constructive feedback concurrently when helping teachers to acquire and master the use of new interactional or teaching practices (e.g.,

Hemmeter et al., 2016; Landry, Anthony, Swank, & Monseque-Bailey, 2009; Powell,

Diamond et al., 2010). Furthermore, in the absence of Constructive Verbal Feedback, teachers may not be aware that they are not implementing a teaching practice with fidelity (Shannon et al., 2015). They are also apt to abandon a practice, because they do not believe it is effective or do not have access to implementation support (Kretlow &

Bartholomew, 2010).

Constructive Verbal Feedback occurred 3.0 times per 30 minutes, on average, but the intensity of teachers’ exposure to Constructive Verbal Feedback varied by coach

(range across coaches = 1.0 – 5.3, Table 4-15). For example, Coaches 2, 4, and 7 provided nearly five

times as many Constructive Verbal Feedback statements compared to most other coaches. In the current study, Teachers 2, 4, and 7 received approximately five

Constructive Verbal Feedback statements in a 30-minute coaching debrief, whereas

Teachers 1, 3, and 6 received approximately one Constructive Verbal Feedback statement during that same period of time. Given that the intent of Constructive Verbal

Feedback is to support the acquisition and fluency of EBP implementation, one statement once a week about how the teacher could refine their use of interactional and teaching practices in the classroom might not be sufficient.

Coach and teacher demographics appeared to influence the use of Constructive

Verbal Feedback. As noted previously, Coaches 1, 2, 4, and 7 had prior and concurrent experience on research studies where PBC was a component of the independent variable and these coaches exhibited the highest rate of Supportive Verbal Feedback.

Yet, of these coaches, Coach 1 had concurrent but not prior research experience with

PBC, and did not use Constructive Verbal Feedback often as evidenced by her low rate of implementation (i.e., 1.3 statements per 30 minutes). Knowledge of the importance of

Constructive Verbal Feedback is insufficient to ensure coaches will use this verbal behavior with fluency in the context of coaching debrief meetings. Rather, coaches need explicit practice and performance feedback about their coaching and use of verbal behaviors. It is also of interest to note that the teachers who received the most

Constructive Verbal Feedback either did not hold degrees in early childhood (Teachers 2 and 7) or had less teaching experience (Teachers 2 and 4). It is possible that the coaches within these dyads had more frequent opportunities to provide constructive feedback in an effort to support the teachers with less experience in the birth-to-five context to bring

their teaching practices in alignment with developmentally appropriate practice and the embedded instruction teaching practices that were the focus of the collaborative coaching partnership (Scheeler et al., 2004).

Constructive Verbal Feedback was implemented at the highest rate per 30 minutes during the Reflection and Feedback (5.6), Problem Solving (2.8), Summarizing

(2.7), and Goal Setting and Action Planning (1.2) conversation foci (Table 4-17).

Although Constructive Verbal Feedback requires the same specificity as Supportive

Verbal Feedback, it appears to have been easier for coaches to implement this verbal behavior in response to the teacher’s in situ reflection or in the context of addressing an implementation challenge. The highest rates of Constructive Verbal Feedback occurred during Reflection and Feedback (5.6) and Problem Solving (2.8, Table 4-17), suggesting that the coaches provided teachers with the opportunity to express thoughts, concerns, or questions about the coaches’ recommendations. According to Thurling,

Vermeulen, Bastiaens, and Stijnen (2013), the provision of feedback should be followed by an opportunity for the recipient to respond through collaborative dialogue or demonstration when working with adult learners. What is unknown from these data is whether Constructive Verbal Feedback delivered in the context of Summarizing, a coach-talk-only conversation focus (2.7 times per 30 min, Table 4-17), was immediately followed by a more collaborative conversation focus. Sequential analyses of the observational data would be necessary to draw these conclusions.
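
As one hedged illustration of what such a sequential analysis might involve, the sketch below tallies which conversation focus immediately follows Summarizing segments in which Constructive Verbal Feedback was delivered, using a simple lag-1 transition count in Python. The segment structure, field names, and values are hypothetical and are not drawn from the study’s coded files.

    from collections import Counter

    # Hypothetical ordered conversation-focus segments from one coded debrief meeting.
    # Each entry flags whether Constructive Verbal Feedback (CF) occurred in that segment;
    # the structure and values are illustrative only.
    segments = [
        {"focus": "Reflection and Feedback", "cf": True},
        {"focus": "Summarizing", "cf": True},
        {"focus": "Goal Setting and Action Planning", "cf": False},
        {"focus": "Summarizing", "cf": False},
        {"focus": "Coaching Process", "cf": False},
    ]

    # Lag-1 transitions: which conversation focus immediately follows a Summarizing
    # segment in which the coach delivered Constructive Verbal Feedback?
    followers = Counter(
        nxt["focus"]
        for cur, nxt in zip(segments, segments[1:])
        if cur["focus"] == "Summarizing" and cur["cf"]
    )

    print(followers)  # e.g., Counter({'Goal Setting and Action Planning': 1})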

Instructional statements. Instructional Statements occurred on average 8.9 times per 30 minutes (range across coaches = 2.3 – 15.9, Table 4-15), increasing on average from 8.3 to 12.5 between the first and third occasion (Table 4-14). These data

suggest coaches may have increased in their confidence and familiarity with the embedded instruction practices over time, thus using Instructional Statements more readily as a coaching strategy. In addition, coaches may have adapted their use of instructional statements to meet the teacher’s phase and pace of learning, and to emphasize they had learned more about the teacher’s classroom context and students in earlier sessions. Instructional Statements are similar to Constructive Verbal

Feedback, because the intent of this verbal behavior is to teach or inform, but they do not require the coach to connect the guidance provided to the teacher’s actions.

Instructional Statements occurred at three times the rate of Constructive Verbal

Feedback, on average, across occasions (Tables 4-14 and 4-15).

One explanation for coaches’ proclivity towards Instructional Statements over

Constructive Verbal Feedback may be their level of comfort with directly addressing teachers’ misunderstandings or less effective use of the targeted interactional and teaching practices that are the content focus of PBC. Killion (2009) contends some coaches in the K-12 context engage in what she refers to as “coaching light” when being liked or viewed as a support is prioritized by the coach over focusing on the teacher’s practices and child outcomes. Within the field of education there is a need for studies that examine coaches’ understanding and perspectives of using constructive performance-based feedback to shape teacher behavior. Focus groups of coaches associated with the larger embedded instruction efficacy trial revealed that some coaches appreciated repeated opportunities to practice providing constructive feedback.

In fact, they reported that ongoing practice and video exemplars of constructive feedback built their confidence in providing this type of performance-based feedback

189

(Embedded Instruction Project, 2017). Despite the apprehension by some coaches to provide this type of feedback, it is important to note that teachers who have been the recipient of constructive feedback in the context of PBC collaborative partnerships have found it beneficial (Shannon et al., 2015).

The use of Instructional Statements, similar to Constructive Verbal Feedback, occurred often during Reflection and Feedback conversations (8.0 times per 30 minutes), but occurred most often during Goal Setting and Action Planning (14.1 times per 30 minutes) and Problem Solving conversations (18.3 times per 30 minutes, Table 4-17). The broad use of Instructional Statements and Constructive Verbal Feedback across multiple conversation foci illustrates that coaches sought opportunities to integrate new information and to challenge misconceptions about the embedded instruction practices or practices related to general classroom quality that support embedded instruction throughout the coaching debrief meeting. The distribution of new information throughout the conversation and in connection with the teacher’s self-evaluation and planning may have prevented teachers from feeling overwhelmed or lectured to and perhaps promoted the collaborative nature of the PBC framework.

Demonstration. Demonstrations (DM) encompassed both verbally modeling and role-playing how to implement specific teaching practices, both of which have been identified as evidence-based adult learning strategies for illustrating how to use practices and confirming adult learners’ understanding (Trivette et al., 2009). On average across all videos, coaches used Demonstration 1.7 times per 30 minutes (range 0.0 - 5.1, Table 4-15), and the average rate increased from 1.9 to 2.9 times per 30 minutes from occasion 1 to occasion 3 (Table 4-14). However, individual coach data again indicated variability. Only Coaches 1, 4, and 7 used this type of verbal behavior with regularity. The coach’s capacity to use this strategy may be associated with prior consultation and coaching experience in combination with training on adult learning principles, because coaches who had these types of experiences used Demonstration at the highest rates. Demonstration was used in all conversation foci except Personal/Pleasantries and at the highest rate during Problem Solving (0.6 times per 30 minutes, Table 4-17). Modeling specific teaching or interactional practices, followed by teachers practicing or applying the skill, has demonstrated efficacy for supporting acquisition and accurate implementation of teaching practices (Kretlow & Bartholomew, 2010). Problem Solving, therefore, is a logical and appropriate time for helping teachers to see and practice an interactional or teaching practice in response to an identified classroom dilemma.

Clarifying and probing questions. Two types of questioning verbal behavior were recorded. Clarifying Questions occurred on average 13.9 times per 30 minutes (range across coaches = 4.4 – 21.9, Table 4-15) and Probing Questions occurred on average 4.3 times per 30 minutes (range across coaches = 1.2 – 8.0, Table 4-15). Both types of questions occurred across most conversation foci, signifying the coaches’ efforts to engage the teacher and elicit embedded instruction practice-focused teacher-talk throughout the coaching debrief meeting. Similar to other verbal behaviors, there was variability between coaches in the rate of questioning. Coaches who asked Clarifying Questions at higher rates tended to ask fewer Probing Questions (Table 4-15). For example, Coaches 2 and 3 asked more than ten times as many Clarifying Questions as Probing Questions, totaling more than 20 Clarifying Questions per 30 minutes. In contrast, Coaches 1, 4, and 7, the coaches with prior training on adult learning principles and consultative or coaching experience, asked fewer Clarifying Questions but at least five Probing Questions per 30 minutes.

Both Clarifying Questions and Probing Questions are appropriate within the coaching debrief meeting. Probing Questions elicit a more sophisticated level of teacher reflection and participation in the conversation. In addition, the types of questions posed can play an important role in prompting teachers to engage in self-reflection and evaluation. Moyers and colleagues (2007) found that open-ended or probing questioning in clinical psychology focused on “change” (e.g., What would have made block play more successful for embedded learning opportunities for Kisharra?) increased the probability of participants’ openness to changing their behavior. If the focus of coaching is enhancing teachers’ use of evidence-based interactional and teaching practices, as in PBC, then asking probing questions focused on change would be important.

Probing Questions occurred at the highest rate during Goal Setting and Action Planning (6.4 per 30 minutes, Table 4-17), a logical and strategic time for eliciting the teachers’ perspective about future goals or how the achievement of action steps has influenced the embedded instruction practices and target children’s learning. However, Probing Questions occurred at the second highest rate during Personal/Pleasantries (5.5 per 30 min, Table 4-17), a conversation focus that is not directly related to enhancing the teachers’ planning, implementation, or evaluation of embedded instruction practices. Although this is not harmful and may even benefit the collaborative partnership, it would be preferable to observe coaches using this verbal behavior more often when the conversation focuses on Reflection and Feedback (4.0 per 30 min, Table 4-17) or Problem Solving (3.9 per 30 min, Table 4-17). Increased use of Probing Questions during Reflection and Feedback and Problem Solving would ensure coaches were eliciting the teachers’ perspective about their implementation of embedded instruction practices. Increased teacher talk would also allow the coach to tailor the support provided to address individual needs, misunderstandings, or implementation challenges identified by the teacher.

Beyond the rate of verbal behavior for Clarifying versus Probing Questions, coaches frequently did not provide sufficient wait time for teachers to answer more complex Probing Questions. Coaches often provided less than 3 seconds for teachers to gather their thoughts before providing a follow-up Clarifying Question that could be answered with a yes or no response (e.g., What was challenging about using the visual cue in the antecedent today? Do you think you had the child’s attention?). Coaches may need instruction around wait time and the importance of asking a single question at a time, which would increase the teacher’s ability to answer and participate in the coaching debrief meeting (Salisbury et al., 2017). The use of questioning observed in the present study suggests that when, how, and how often to use different types of questions may be an area for additional study, coach training, and support.
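
To make the wait-time observation above more concrete, the sketch below (written in Python purely for illustration) shows one way the latency between a coach question and the next coded utterance could be computed from timed-event records; the event names, timestamps, and the 3-second benchmark are hypothetical assumptions and do not reflect how the CPOT-RVI or the larger project actually quantified wait time.

    # Illustrative sketch only: estimating "wait time" as the latency between the
    # offset of a coach question and the onset of the next coded event.
    # Event tuples and code names are hypothetical, not CPOT-RVI definitions.

    # Each event: (onset_seconds, offset_seconds, speaker, code)
    events = [
        (100.0, 104.5, "coach", "Probing Question"),
        (104.9, 120.0, "teacher", "Teacher Talk"),
        (150.0, 153.0, "coach", "Probing Question"),
        (154.8, 155.2, "coach", "Clarifying Question"),  # follow-up asked before the teacher responds
        (155.5, 160.0, "teacher", "Teacher Talk"),
    ]

    def wait_times(events, question_codes=("Probing Question", "Clarifying Question")):
        """Return seconds elapsed between each coach question offset and the next event onset."""
        latencies = []
        for i, (_, offset, speaker, code) in enumerate(events[:-1]):
            if speaker == "coach" and code in question_codes:
                latencies.append(events[i + 1][0] - offset)
        return latencies

    for latency in wait_times(events):
        flag = "less than 3 s" if latency < 3.0 else "3 s or more"
        print(f"wait time = {latency:.1f} s ({flag})")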

The Influence of Demographics on Conversation Foci and Verbal Behavior

Prior training on adult learning principles and prior coaching or consultative experience appeared to influence the duration of some conversation foci (i.e., Coaching Process and Summarizing) and the rate of some coach verbal behaviors (i.e., Demonstration, Probing Questions, Clarifying Questions), while prior exposure to the PBC framework within the context of a research study also appeared to be important for understanding coaches’ use of Supportive Verbal Feedback and Constructive Verbal Feedback. Although it is unlikely most coaches will participate in large research studies, the data in the present study suggest that a depth of understanding about why the use of specific strategies is essential and how they benefit teacher and child outcomes would be important for all coaches. These findings are aligned with implementation science literature that suggests coach-based professional development initiatives could benefit from setting staff selection criteria or specific qualifications when hiring coaches (Fixsen et al., 2005; Metz et al., 2013). Furthermore, within coaching frameworks like PBC that occur around a set of identified interactional and teaching practices, coaches require both content and practice-specific training on how most effectively to use Supportive and Constructive Verbal Feedback. The evidence indicates that prior coaching and consultative experiences alone did not appear to consistently influence coaches’ use of these coach verbal behaviors, which have been demonstrated to support teacher learning. Yet, multiple years of previous experience implementing the PBC framework did increase the likelihood that coaches would use these verbal behaviors (i.e., Coaches 2, 4, and 7). The provision of high-quality performance-based feedback is one area where all coaches would have benefitted from additional training and feedback.

A strength of the PBC framework is the emphasis on “active ingredients” or key components and coaching practices, which offer a structured approach to coaching implementation with systematic opportunities for being adaptive and accommodating of individual teachers and their applied practice contexts. Fidelity of implementation literature contends variability and adaptation may benefit teachers by tailoring the PBC framework and embedded instruction content introduced in workshops to meet individual teachers’ needs, preferences, and motivations. However, a “zone of tolerable adaptation” for the PBC framework (i.e., the threshold of acceptable change before teacher or child outcomes are impacted by the adaptation; Hulleman et al., 2013) has yet to be established.

Dyadic Transactions Across the Collaborative Partnership

The relationship and collaboration constructs in the broader coaching literature include a combination of interpersonal qualities (e.g., respect, empathy, trust, honesty) that can be influenced by (a) the participants’ culture, dispositions, and beliefs about teaching and young children and how they enact these beliefs through their classroom practices (Barrera & Kramer, 2009; Killion, 2009; Rush & Shelden, 2011) and (b) techniques or communicative skills (e.g., reciprocity, reflective listening, collaboration, choice, open-ended questioning, raising awareness; Knight, 2007; Peterson, 2013).

Within PBC, the collaborative partnership construct encompasses aspects of the broader literature, but is unique because of the emphasis on dyadic relational and participatory transactions (Dunst & Trivette, 2009; Sameroff, 2009). This includes (a) being a respectful, empathetic, and active listener; (b) assessing strengths, preferences, needs, and motivations and using that information to set shared goals and plan actions (Snyder & Wolfe, 2008); and (c) using a defined coaching protocol, which makes the aims of the coaching process transparent and mutually agreed upon (Snyder et al., 2015).

The collaborative partnership is the context for all other components of the PBC framework. Positioning all other components within the context of the collaborative partnership exemplifies how the PBC coaching framework strives to be both evidence-based and responsive to individuals. Shifts in the duration of conversation focus, coach-teacher initiations, and coach verbal behavior in the present study illustrate ongoing accommodations and adaptations between the coach and teacher across occasions, as the practice- and goal-focus of the collaborative partnership develops. The moment-to-moment transactions that occur between the coach, teacher, children, and classroom context shape the quality of the collaborative partnership and the work that occurs within the partnership. The successful development of a collaborative partnership has the potential to influence the participants’ ability to achieve the desired outcomes of coaching.

The transactional and cyclical design of PBC is seen in concrete ways through evolving action plans and resources, but also in less apparent relational and participatory transactions that reinforce and shape future behavior and interactions. For example:

• The coach may increase wait time following a Probing Question and learn that, given additional time to process and respond to new information, the teacher participates in the conversational exchange more often and, over time, is more motivated to engage in transactional exchanges.

• The teacher brings her data sheets to the coaching debrief meeting and the coach provides descriptive and specific praise about her practice implementation. The receipt of this positive and descriptive feedback might motivate the teacher to bring data sheets to future sessions.

• The coach and teacher may learn that they find it reinforcing to discuss their own children or an upcoming family event as they gather materials and prepare for the coaching debrief meeting. The occurrence of these events as part of each debrief meeting might lead to a stronger collaborative partnership.

The PBC collaborative partnership consistently focuses on supporting the teacher to acquire and maintain the use of EBP, but the form of that support across different teachers and with the same teacher from session 1 to session 15 was often quite different in the larger TfT study. The selection of coaching practices, including the duration of conversation foci, initiations, and rate of coach verbal behaviors, used to support each individual teacher across the coaching partnership should be aligned with the PBC framework and its components, the strengths and needs of both the coach and the teacher, the teacher’s context (i.e., student population, program type, practice-focus), and available resources (i.e., time, materials, motivations). Observed coach-teacher adaptations and accommodations evident in the data from the present study are discussed below.

Conversation foci

Across the sampled occasions, the allocation of conversation focus time shifted as the dyad developed a collaborative partnership and their current goal- and practice-focus. Table 4-9 shows the percent of time in each conversation focus for each dyad across the three sampled occasions. This table highlights how coaches worked within the structure of the PBC framework, consistently engaging in the Coaching Process, Goal Setting and Action Planning, and Reflection and Feedback conversation foci. Yet, coaches appeared to individualize and adapt their conversation foci based on the teacher’s phase and pace of learning and progress with current action plan goals and steps. For example, Dyad 4 spent the least amount of time overall in Goal Setting and Action Planning (20.6%); however, at occasion 1, 32.1% of the time was spent in this conversation focus. Perhaps Dyad 4 became more proficient at establishing goals and using the action plan, allowing them to reduce their time in Goal Setting and Action Planning and to increase their time in Reflection and Feedback at occasions 2 and 3.

Another example of adaptation of the PBC protocol is evidenced in Dyad 7. This dyad never exhibited the highest percentage of time in Personal/Pleasantries. Dyad 7 was the only partnership, however, that consistently engaged in this conversation focus across all three occasions. Coach 7 made accommodations within the coaching protocol and debrief time to re-connect on a personal level with Teacher 7 before focusing on the observation, action plan, and teaching practices each week. This might reflect an important aspect of building the collaborative partnership with this teacher. The behaviors of the coach and teacher in each dyad contributed to the duration of conversation focus over time, with the coach likely having greater influence in earlier sessions, but shifting that influence to be more equitable over time.

Coach-teacher initiations

All teachers in the present study had some experience in the preschool classroom. Coaches facilitated the process of connecting information gained through the embedded instruction workshops directly to each teacher’s classroom. Collaboratively, the coach and teacher identified shared goals that were important within the teacher’s instructional context and adapted the 14 embedded instruction interactional and teaching practices to meet the needs of the teacher’s children and ongoing classroom activities and routines. The cyclical nature of the PBC framework provides systematic and intentional opportunities for the coach and teacher to check in and to make necessary adaptations to the coaching process and practices, enhancing the likelihood that teachers would find the embedded instruction practices usable, feasible, and sustainable.

On average, teachers’ conversation foci initiations increased by 12.8% from the first to the third occasion, with some individual teachers’ initiations increasing by as much as 31.2%, as seen with Dyad 7. Increased teacher initiations signal teachers’ growing confidence and agency in determining the direction of the coaching debrief meeting conversation and may also be an indication of increased teacher competency or knowledge of the embedded instruction teaching practices. As teachers increased their initiations, coaches accommodated by following the teachers’ lead and continuing conversation foci initiated by the teacher. The largest percentage of teacher-initiated conversations occurred in Problem Solving (50%) and Reflection and Feedback (34.1%). These conversation foci might have been initiated more often by teachers because they provided access to coach support to address an immediate embedded instruction need within their classroom. Prior studies of teachers’ perspectives about on-site PBC indicated that many preschool teachers reported they rarely had access to ongoing performance-based feedback and support to address dilemmas in their classrooms, in particular from someone who was knowledgeable about preschool classrooms and practices (Shannon et al., 2015).

Another shift indicating growing teacher confidence and mutual adaptation can be seen in the duration and initiations related to the Coaching Process conversation focus across the three occasions. The duration of Coaching Process decreased over time, indicating dyads were dedicating less time to roles and responsibilities as the collaborative partnership developed. When Coaching Process conversations did occur, however, they were increasingly initiated by the teacher from occasion one (6.3%) to occasion three (31.3%, Table 4-12). Teachers appear to have become more independent in their ability to reference existing action plans to identify how they would accomplish the next step over the coming week and to articulate specific ways in which they wanted to be supported by the coach in the next focused observation. These developments within the Coaching Process conversation focus suggest teachers gained proficiency in their ability to identify roles and responsibilities within the collaborative coaching partnership, and the coach slowly withdrew support, allowing the teacher to lead this component of the coaching debrief meeting as she was ready.

Building Teachers’ Confidence and Competence Towards Sustainability

Early intervention has a long history of developing supports for families and practitioners grounded in a strengths-based, ecologically focused, culturally competent, capacity-building approach. This approach was designed to support both families of young children with or at risk for disabilities from birth to age 5 and the practitioners who support them (Dunst & Trivette, 2009; McLean, Sandall, & Smith, 2016; Snyder et al., 2011). Recommended practices in early intervention/early childhood special education emphasize these features of practice (Division for Early Childhood [DEC], 2014). The PBC framework used in the present study and the larger efficacy trial encompasses this approach. Furthermore, the transactional exchanges between the coach and the teacher, in which both participants contribute their assets, including experience, knowledge, skills, and dispositions, to the collaborative partnership, are integral to the implementation of the PBC framework, as are principles from organizational behavior management and applied behavior analysis (Snyder et al., 2015). Supporting teachers to integrate new practices into their existing classroom culture—defined as routines, roles, relationships, responsibilities, and ways of using particular artifacts (Frank, 2011)—over time increases the likelihood that teachers will acquire and sustain the use of new interactional and teaching practices.

Active involvement in planning for, implementing, and evaluating one’s use of knowledge and skills, with a small number of learners, over an extended period of time has been identified in the extant literature as among the most efficacious characteristics for facilitating adult learning (Trivette et al., 2009; NRC, 2000). Furthermore, meta-analytic work completed by Trivette and colleagues found that the largest effect sizes for adult acquisition of new knowledge and skills were attributed to these characteristics in combination with performance feedback using a conceptual or operational definition to facilitate self-assessment (Trivette et al., 2009). PBC is grounded in these characteristics and encompasses each of them when implemented with fidelity (Snyder et al., 2015). Data from the present study contribute to these conclusions regarding the importance of active involvement and performance-based feedback for both coaches and teachers, when the intended outcome of an implementation initiative is to build teachers’ capacity to implement interactional teaching practices with fidelity in birth-to-five contexts.

Implications from the Present Study

This study contributes to a growing body of research exploring the use of coaching to enhance preschool teachers’ confidence and competence to use EBP in the classroom. In the present study, PBC was the coaching approach used. Findings have implications for designing and implementing coach-based professional development initiatives and methods for defining, describing, and quantifying how coaches facilitate teacher learning and collaborative partnerships focused around a set of defined teaching practices.

Implications for Coach-based Professional Development

Findings of the present study offer preliminary implications for developing implementation supports as a part of coach-based professional development. First, operationally defining the key components of a coaching framework or model and providing coaches with training, ongoing fidelity feedback, and implementation supports is likely to be associated with high fidelity of implementation of the coaching protocol and, in turn, fidelity of implementation of the interactional and teaching practices by teachers. These findings are consistent with a broader effort to advance the science of coaching in early childhood by clearly defining the content, structure, and processes of coaching when it is used as an independent variable in research (Powell & Diamond, 2013b; Snyder et al., 2011). This also applies in practice, as states, districts, and programs seek to adopt interventions that are scalable, efficient, and cost-effective for enhancing the quality of interactional and instructional early childhood experiences and child outcomes (Artman-Meeker et al., 2015; Isner et al., 2011; Zaslow et al., 2010). The extent to which coaches have measurable and replicable impacts on teachers’ classroom practices, and, in turn, child outcomes, is contingent on coaches implementing with fidelity an evidence-based coaching framework, like PBC, which employs principles of adult learning theory, has operationally defined coaching practices, and has demonstrated efficacy in achieving teacher and child outcomes across a variety of teaching practices and contexts (Snyder et al., 2015).

Another finding of the present study is that teachers’ experience with the PBC components and coaching strategies appears to vary as a function of (a) coaches’ fidelity and quality of implementation of the coaching protocol and (b) coaches’ and teachers’ individual demographics, knowledge, and experiences. Due to the transactional nature of the coach-teacher interactions within each dyad and the small sample included in the present study, the individual coach and teacher contributions are difficult to isolate. Yet, the observed variability suggests two important implications. First, it is important to consider the selection, training, and ongoing supports (i.e., job aids, access to formal and informal support, regular fidelity feedback) necessary for ensuring coaches use evidence-based implementation supports like PBC with fidelity, particularly given coaching has been identified as an important competency driver within larger implementation science frameworks (Fixsen et al., 2005). In addition, when specified coach behaviors are occurring at less than desired rates, they should be specifically targeted for additional training. Second, it is important to use an adaptive intervention approach in which the intensity of coaching supports and strategies used aligns with the strengths and needs of individuals receiving those supports (Kretlow & Bartholomew, 2010). For example, it may be that novice or out-of-area teachers benefit from additional coaching or more intensive exposure to certain conversation foci or coach verbal behaviors. If so, an adaptive approach to coaching and professional development would be more efficacious than a more traditional “one size fits all” approach. The coach-based PD supports provided to aid teachers in the adoption and installation of EBP should be well documented to increase replication and facilitate the development of acceptable thresholds of adaptation. Systematic documentation of these coach and teacher supports, as well as the environmental factors that appear to aid in or hinder their success (e.g., leadership, policies, practices), is needed to build capacity and scale up coach-based professional development initiatives (Artman-Meeker et al., 2015; Snyder et al., 2012; Zaslow et al., 2010).

The primary implications of the present study focus on the “competency drivers” (selection, training, and coaching) within an active implementation science framework (Fixsen et al., 2005; Metz et al., 2013). The competency drivers, however, are only one aspect of supporting the adoption, installation, and sustained use of evidence-based interactional and teaching practices that have the greatest likelihood to enhance child outcomes. Programs must also consider both “organizational drivers” and the leadership teams necessary for installing new interventions or innovations. Organizational drivers encompass clearly defined goals and policies, plus mechanisms for enacting those goals and policies through data-informed policy-practice feedback loops. Leadership teams must adopt a technical and adaptive leadership approach. The technical dimension assures the development and use of (a) structured procedures for attaining clearly defined and measurable goals and (b) systematic professional development plans that are aligned with supports provided by the competency drivers. The adaptive dimension assures that leadership teams engage in continuous cycles of self-improvement based on the policy-practice feedback loop. The promise of closing the research-to-practice gap in birth-to-five programs and building the capacity to fully implement EBP interventions and innovations requires the leadership team, organizational drivers, and competency drivers to be aligned towards a clearly articulated, shared, and measurable goal.

Methodological Implications for Defining and Describing “Coaching”

The ability to identify the “active ingredients” in coaching, which can effectively facilitate teachers’ acquisition and mastery of new knowledge and skills, begins with operationally defining behaviors demonstrated in the extant literature to have an impact, teaching those behaviors to coaches, and measuring the extent to which coaches implement those behaviors (Isner et al., 2011; Powell & Diamond, 2013b; Snyder et al., 2011). Although measuring fidelity of coaching implementation can achieve this to some extent, a finer-grained analysis is needed to determine if the intensity of particular conversation foci or coach verbal behaviors makes coaching more or less effective, for whom, and under what circumstances. The present study does not link coach-teacher interactions to teacher outcomes, but takes an important next step towards the ability to perform these types of analyses in the future.

The present study also contributes to the literature through the development and application of a continuous timed-event observational coding system that can be used by coders to explore observed duration of conversation focus, coach-teacher initiations, rate of coach verbal behavior, and change in these variables across occasions. Previous research on coaching interactions employing observational coding systems used partial interval (Jayaraman et al., 2015; Salisbury et al., 2012) or whole interval recording (Campbell & Coletti, 2013). The use of interval coding has the potential to over- or under-represent the occurrence of behaviors, and when seeking to measure both duration and number of behaviors, continuous timed-event coding is the most complete method (Yoder & Symons, 2010). In addition, prior studies coded only the coaching strategies used and not the conversational focus during which the coaching strategy or verbal behavior was implemented. By including information about the conversational focus, it is possible to make inferences about the function of the coaches’ verbal behavior. For example, coders recorded not only whether Probing Questions occurred, but also whether coaches used them to facilitate the collaborative partnership during Personal/Pleasantries or to engage teachers in Reflection and Feedback. In addition, the documentation of coach versus teacher initiations is a small but important first step in looking at the actions and behaviors of the person receiving the coaching.
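
As a concrete illustration of why timed-event records support both of the metrics reported here, the following minimal Python sketch derives the duration of each conversation focus and the rate of each coach verbal behavior per 30 minutes from a single hypothetical debrief meeting; the codes, timestamps, and session length are invented, and this is not the project’s actual analysis code.

    # Illustrative sketch only: summarizing continuous timed-event records into the two
    # metrics used in this study: duration of each conversation focus and rate of each
    # coach verbal behavior per 30 minutes. All codes, times, and the session length
    # are invented; this is not the project's actual analysis code.

    from collections import defaultdict

    session_minutes = 42.0  # total coded debrief meeting length (hypothetical)

    # Duration-coded events: (onset_min, offset_min, conversation_focus)
    focus_events = [
        (0.0, 2.5, "Personal/Pleasantries"),
        (2.5, 14.0, "Reflection and Feedback"),
        (14.0, 30.0, "Goal Setting and Action Planning"),
        (30.0, 42.0, "Problem Solving"),
    ]

    # Frequency-coded events: (time_min, coach_verbal_behavior)
    behavior_events = [
        (3.1, "Supportive Verbal Feedback"),
        (5.4, "Constructive Verbal Feedback"),
        (16.0, "Instructional Statement"),
        (31.2, "Probing Question"),
        (33.9, "Clarifying Question"),
        (40.0, "Probing Question"),
    ]

    focus_duration = defaultdict(float)
    for onset, offset, focus in focus_events:
        focus_duration[focus] += offset - onset

    behavior_count = defaultdict(int)
    for _, behavior in behavior_events:
        behavior_count[behavior] += 1

    for focus, minutes in focus_duration.items():
        print(f"{focus}: {minutes:.1f} min ({minutes / session_minutes:.1%} of meeting)")
    for behavior, count in behavior_count.items():
        rate = count / session_minutes * 30  # rate per 30 minutes
        print(f"{behavior}: {rate:.1f} per 30 min")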

Finally, the present study differs from prior coaching studies in that it limited the factors that may influence the coaching partnership. In particular, all coaches and teachers were focused on a common set of teaching practices, and participants worked in public preschool classrooms with children ages 3 to 5 years, versus home-based early intervention (Campbell & Coletti, 2013; Salisbury et al., 2012). The 16-week coaching cycle was uniform, and information about the duration of the partnership was used to select occasions representing early, mid, and late coaching debrief meetings in the present study to explore change over time, as opposed to sessions being self-selected by the coach and representing a single occasion (Campbell & Coletti, 2013; Jayaraman et al., 2015).

Recommendations for Future Research

The findings and implications of the present study lead to several recommendations for further research. First, an extension of the present study might include a larger and more diverse sample. Within the present study, one teacher was paired with each coach. Given the transactional nature of the coaching partnership, it would be beneficial to determine if coaches engage in similar conversation foci and verbal behaviors across teachers or if characteristics of the teacher (e.g., novice versus veteran) appear to influence the coach’s behavior. All coach participants in the present study were university-based coaches. Although the PBC framework has been scaled and implemented outside of applied research studies, insufficient data have been collected in those contexts to confirm whether similar patterns of coach behavior are present. Further research might explore whether differences are evident between coaches working directly with the developers of the coaching model in a structured research environment versus those in district or program applications. Learning about the extent to which coaches in more applied settings maintain fidelity to the protocol, engage in similar conversation foci, use similar behaviors when provided with similar supports, and find the PBC framework acceptable, feasible, and useful would be important to programs and states considering the installation of PBC to support early childhood practitioners’ use of developmentally appropriate and recommended environmental, interactional, or instructional practices (DEC, 2014; NAEYC, 2009). In addition, it will be important to document how the knowledge and skills of leadership teams and the organizational policies and priorities influence the coaches’ abilities to achieve these outcomes.

Another way to expand the study sample in future research would be to increase the number of coaching debrief meetings at each occasion. Selecting additional coaching debrief meetings at each time point in future studies could provide a more accurate representation of the dyads’ interactions at each occasion and over time. A larger sample could decrease variability in the data (Yoder & Symons, 2010) and would represent a more accurate estimate of the average behaviors observed across the coaching debrief meetings.

Second, the CPOT-RVI was developed, piloted, and used in the present study. Use of the coding system has not been replicated across multiple evidence-based teaching practices, coaching frameworks, or coders. Although the CPOT-RVI was piloted with Pyramid Model practices (Hemmeter, Snyder et al., 2016) and the present study participants were focused on embedded instruction practices, an insufficient sample of participants was included in the pilot and the present study to determine if and how the practice-focus influenced the coach’s behavior. It may also be illustrative within the field to use the CPOT-RVI to look at similarities and differences in coach behavior across different coach-based frameworks and models, isolating “active ingredients” or dimensions of the coaches’ work that lead to measurable teacher or child outcomes. This line of research has been called for repeatedly (Kretlow & Bartholomew, 2010; Powell & Diamond, 2013; Sheridan et al., 2009; Snyder et al., 2011; Snyder et al., 2012; Zaslow et al., 2010), and remains an area for much-needed investigation. Furthermore, thresholds for acceptable variability in coaching implementation should be established to determine the extent to which required components can be tailored to meet individual teachers’ needs, motivations, and preferences without compromising the ability to achieve desired outcomes.

Third, the CPOT-RVI interobserver agreement (IOA) data in the present study were collected by a graduate student with an undergraduate degree in psychology, experience working with children ages 3-5 years in Head Start, and prior experience using a behavioral observation coding system to examine child behavior. However, the student had not previously received or provided coaching or PBC. IOA data indicate the CPOT-RVI contains some codes with nuanced differences that might be difficult for naïve observers to discriminate (e.g., Constructive Verbal Feedback versus Instructional Statements). Future research using the CPOT-RVI might explore whether interobserver agreement differs between coders who have completed the PBC coach training, served as a PBC coach, and been trained on the CPOT-RVI versus coders who have been trained on the CPOT-RVI but have not had other PBC experiences.
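
Because Cohen (1960) is the agreement index cited in this dissertation’s reference list, the brief Python sketch below illustrates how point-by-point percentage agreement and kappa might be computed for two coders assigning CPOT-RVI-style codes to the same units; the coder data and code abbreviations are invented, and the actual IOA procedures used in the present study may have differed.

    # Illustrative sketch only: point-by-point agreement and Cohen's (1960) kappa for
    # two coders' code assignments. Codes (abbreviated) and data are hypothetical.

    from collections import Counter

    primary   = ["CVF", "IS", "IS", "SVF", "CQ", "IS", "PQ", "CVF"]
    secondary = ["IS",  "IS", "IS", "SVF", "CQ", "CVF", "PQ", "CVF"]

    def percent_agreement(a, b):
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b):
        n = len(a)
        observed = percent_agreement(a, b)
        counts_a, counts_b = Counter(a), Counter(b)
        # Chance agreement: sum over codes of the product of each coder's marginal proportions
        expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(a) | set(b))
        return (observed - expected) / (1 - expected)

    print(f"percent agreement = {percent_agreement(primary, secondary):.2f}")
    print(f"kappa = {cohens_kappa(primary, secondary):.2f}")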

Fourth, the present version of the CPOT-RVI does not capture teacher behavior or measure the extent to which the coaches’ behavior has a measurable impact on the teacher’s knowledge or implementation of the evidence-based practices that are the focus of coaching. Future versions of the system could include teacher verbal behavior and employ sequential analyses to determine the relationship between specific conversation foci, coach verbal behaviors, and the probability that they lead to more or less sophisticated teacher participation in planning, self-reflection, self-evaluation, or problem-solving. In addition, corollary statistical analyses could be conducted to determine if the intensity of exposure to specific conversation foci or coach verbal behaviors is associated with observed changes in the teacher’s self-efficacy related to embedded instruction practice implementation or observed use of the teaching practices in the classroom.
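
To illustrate what the sequential analyses proposed here might look like, the short Python sketch below estimates lag-1 transitional probabilities, that is, the probability that a given coach verbal behavior is immediately followed by a given teacher behavior; the event stream and the teacher codes are invented for illustration, since the current CPOT-RVI does not yet code teacher behavior.

    # Illustrative sketch only: lag-1 transitional probabilities from coach verbal
    # behaviors to (hypothetical) teacher behavior codes. The event stream is invented.

    from collections import defaultdict

    # Chronologically ordered (speaker, code) events from one hypothetical debrief meeting
    event_stream = [
        ("coach", "Probing Question"), ("teacher", "Self-Evaluation"),
        ("coach", "Constructive Verbal Feedback"), ("teacher", "Acknowledgment"),
        ("coach", "Probing Question"), ("teacher", "Self-Evaluation"),
        ("coach", "Clarifying Question"), ("teacher", "Yes/No Response"),
        ("coach", "Probing Question"), ("teacher", "Problem-Solving Talk"),
    ]

    transitions = defaultdict(lambda: defaultdict(int))
    for (speaker_a, code_a), (speaker_b, code_b) in zip(event_stream, event_stream[1:]):
        if speaker_a == "coach" and speaker_b == "teacher":
            transitions[code_a][code_b] += 1

    for coach_code, followers in transitions.items():
        total = sum(followers.values())
        for teacher_code, count in followers.items():
            # P(teacher code | immediately preceding coach code)
            print(f"P({teacher_code} | {coach_code}) = {count / total:.2f}")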

Finally, the present study does not account for teachers’ perspectives about their experience in the coaching debrief meetings and the extent to which PBC is acceptable, feasible, and useful. Prior analyses of teacher focus group data, however, indicate that teachers did find PBC to be acceptable, feasible, and useful for supporting their use of embedded instruction practices in the classroom (Shannon et al., 2015). Future studies might utilize qualitative or mixed methods to understand if the teachers’ experience of working with the coach is more or less favorable when the intensity of specific coach behaviors or conversation foci varies.

Overall Summary

Early care and education that has the capacity to impact child growth and development requires ongoing, high-quality professional development that is active, cohesive, sustained, focused on instructional practices (Desimone, 2009; Desimone et al., 2002; Garet et al., 2001), and paired with systematic follow-up supports that include job-embedded practice with feedback (Joyce & Showers, 2002; NIRN, 2011; Snyder et al., 2012; Zaslow, 2009). Practice-based Coaching (PBC) is one form of professional development that has demonstrated effects on teachers’ implementation of evidence-based teaching practices and child learning (e.g., Snyder, Hemmeter, & Fox, 2015). The extant literature on PBC and coaching in general has emphasized the need to better understand the “active ingredients” of the coaching process which make it more or less efficacious for facilitating measurable teacher and child outcomes.

Operationally defining these ingredients and when and how they are implemented, with whom and at what cost, would assist decision makers in allocating funding for coach-based professional development, preparing coaches, and developing adaptive coaching frameworks that provide targeted and sufficient coaching supports for the individuals who need them most (Snyder et al., 2011).

The present study explored how coaches facilitated conversations during the coaching debrief meeting component of an on-site PBC partnership. A direct behavioral observation system was developed to investigate (a) the proportion of time allocated to different conversational foci, including who initiated the conversation focus; and (b) the verbal behaviors used by coaches to support teachers’ active participation in planning for, implementing, and evaluating the embedded instruction practices that were the focus of coaching. Descriptive methods were used to determine whether the conversation foci, coach and teacher initiations, and verbal behaviors changed across the three occasions (occasion 1 = sessions 1-5, occasion 2 = sessions 6-10, occasion 3 = sessions 11-15) as the coaching partnership developed for seven coach-teacher dyads. Data for the present study were collected in the first year of a larger efficacy trial examining Tools for Teachers (TfT), a PD intervention designed to support classroom teachers to use embedded instruction teaching practices with preschool children with disabilities (Snyder, Algina et al., 2015).


Results from the present study provide evidence to support the use of a coach manual, protocol, and initial training, plus ongoing supports for the coaching staff, which include fidelity feedback to ensure the coaching framework is implemented as intended (Durlak & DuPre, 2008; Snyder, Hemmeter, & Fox, 2015; Wolery, 2011). Results also provide support for the need to have a coaching framework that is structured but tolerant of variability in how coaches implement the required components to meet the needs of individual teachers.

In the present study sample, select coach and teacher demographics were related to some of the observed coach and teacher behaviors. Coach demographics, including prior exposure to adult learning theory and coaching or consultation experience, appeared to be associated with the coaches’ increased use of the Problem-Solving conversation focus and the Demonstration and Probing Question verbal behaviors, while prior experience with PBC specifically appeared to have an influence on coaches’ use of Supportive Verbal Feedback and Constructive Verbal Feedback. Teacher demographics, including less experience in the classroom, appeared to be associated with more time spent in the Coaching Process and Summarizing conversation foci and fewer conversation foci initiations, while the teacher’s not having a degree in early childhood and less experience in the classroom appeared to be associated with a higher rate of Constructive Verbal Feedback.

Findings of the present study show teachers’ conversation focus initiations changed across the three occasions (early, mid, and late coaching debrief meetings). The largest proportion of teacher initiations occurred within the Problem-Solving and Reflection and Feedback conversation foci. In addition, the proportion of time in Summarizing and Coaching Process decreased from occasion 1 to occasion 3, as the collaborative partnership developed.

Observational coding systems can be a valuable method for quantifying the process components of the coaching debrief meeting and should be used in combination with other methods to define and describe coaching and its impact in educational settings. Given the recognized importance of determining the “active ingredients” in coach-based professional development, observational coding systems are a promising methodological approach. They provide a way to explore corollary relationships between the intensity of the teachers’ exposure to key components of coaching and their impact on the teachers’ use of evidence-based practices in the classroom, where they have the potential to enhance the quality of interactional and instructional experiences for young children.


LIST OF REFERENCES

Abbott, M., Kaminski, R. A., Aguayo, K. B., & Latimer, R. J. (2014). Preschool early literacy indicators (PELI™). Eugene, OR: Dynamic Benchmark Group.

Abell, E., Arsiwalla, D. D., Putnam, R. I., & Miller, E. B. (2014). Mentoring and facilitating professional engagement as quality enhancement strategies: An overview and evaluation of the family child care partnerships program. Child & Youth Care Forum, 43, 569-592.

Anderson, M. (2014). Training childcare center administrators about integrated pest management through greener environmental communication venues and collecting pesticide use data in the process. Applied Environmental Education & Communication, 13, 162-170.

Artman-Meeker, K., Fettig, A., Barton, E. E., Penney, A., & Zeng, S. (2015). Applying an evidence-based framework to the early childhood coaching literature. Topics in Early Childhood Special Education, 35, 183-196.

Artman-Meeker, K. M., & Hemmeter, M. L. (2012). Effects of training and feedback on teachers’ use of classroom preventive practices. Topics in Early Childhood Special Education, 33, 112-123.

Artman-Meeker, K., Hemmeter, M. L., & Snyder, P. (2014). Effects of distance coaching on teachers' use of Pyramid Model practices: A pilot study. Infants & Young Children, 27, 325-344.

Barnett, W. S., Friedman-Krauss, A. H., Weisenfeld, G. G., Horowitz, M., Kasmin, R., & Squires, J. H. (2017). The state of preschool 2016: State preschool yearbook. New Brunswick, NJ: National Institute for Early Education Research.

Barrera, I., & Kramer, L. (2009). Using skilled dialogue to transform challenging interactions. Retrieved from: http://www.naeyc.org/files/naeyc/2012NAEYC_Ebook3.pdf

Barton, E. E., & Wolery, M. (2007). Evaluation of e-mail feedback on the verbal behaviors of pre-service teachers. Journal of Early Intervention, 30, 55-72.

Bijou, S. (1993). Behavior analysis of child development. Oakland, CA: New Harbinger.

Campbell, P. H., & Coletti, C. E. (2013). Early intervention provider use of child caregiver–teaching strategies. Infants & Young Children, 26, 235-248.

Casey, A. M., & McWilliam, R. A. (2011). The characteristics and effectiveness of feedback interventions applied in early childhood settings. Topics in Early Childhood Special Education, 31, 68-77.


Clements, D. H., Sarama, J., Wolfe, C. B., & Spitler, M. E. (2015). Sustainability of a scale-up intervention in early mathematics: A longitudinal evaluation of implementation fidelity. Early Education and Development, 26, 427-449.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.

Conroy, M. A., Sutherland, K. S., Algina, J. J., Wilson, R. E., Martinez, J. R., & Whalon, K. J. (2015). Measuring teacher implementation of the BEST in CLASS intervention program and corollary child outcomes. Journal of Emotional and Behavioral Disorders, 23, 144-155.

Conroy, M. A., Sutherland, K. S., Vo, A. K., Carr, S., & Ogston, P. L. (2014). Early childhood teachers’ use of effective instructional practices and the collateral effects on young children’s behavior. Journal of Positive Behavior Interventions, 16, 81-92.

Crow, R.E., & Snyder, P. (1998). Organizational behavior management in early intervention: Status and implications for research and development. Journal of Organizational Behavior Management, 18, 131-156.

Darling-Hammond, L., & Richardson, N. (2009). Research review/teacher learning: What matters. Educational Leadership, 66, 46-53.

Desimone, L. M. (2009). Improving impact studies of teachers’ professional development: Toward better conceptualizations and measures. Educational researcher, 38(3), 181-199.

Desimone, L. M., Porter, A. C., Garet, M. S., Yoon, K. S., & Birman, B. F. (2002). Effects of professional development on teachers’ instruction: Results from a three-year longitudinal study. Educational Evaluation and Policy Analysis, 24, 81- 112.

Diamond, K. E., Justice, L. M., Siegler, R. S., & Snyder, P. A. (2013). Synthesis of IES research on early intervention and early childhood education. NCSER 2013- 3001. National Center for Special Education Research.

Division for Early Childhood. (2014). DEC recommended practices in early intervention/early childhood special education 2014. Retrieved from http://www.dec-sped.org/recommendedpractices

Donovan, M. S., Bransford, J. D., & Pellegrino, J. W. (1999). How people learn: Bridging research and practice. Washington, DC: National Academy Press.

Downer, J. T., Locasale-Crouch, J., Hamre, B., & Pianta, R. (2009). Teacher characteristics associated with responsiveness and exposure to consultation and online professional development resources. Early Education and Development, 20, 431-455.


Dunst, C. J., & Trivette, C. M. (2009). Capacity-building family-systems intervention practices. Journal of Family Social Work, 12, 119-143.

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350.

Embedded Instruction Project. (2016). Embedded instruction cohort 1 onsite coaching focus group [Transcript]. Unpublished data. Anita Zucker Center for Excellence in Early Childhood Studies, University of Florida, Gainesville, FL.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network, University of South Florida, Tampa, FL.

Fox, L., Hemmeter, M. L., Snyder, P., Binder, D., & Clarke, S. (2011). Coaching early childhood special educators to implement a comprehensive model for promoting young children’s social competence. Topics in Early Childhood Special Education, 31, 178-192.

Frank, C. (2011). Ethnographic interviewing for teacher preparation and staff development: A field guide. New York, NY: Teachers College Press.

Friedman, M., Woods, J., & Salisbury, C. (2012). Caregiver coaching strategies for early intervention providers: Moving toward operational definitions. Infants & Young Children, 25, 62-82.

Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S. (2001). What makes professional development effective? Results from a national sample of teachers. American Educational Research Journal, 38, 915-945.

Gomez, R. E., Kagan, S. L., & Fox, E. A. (2015). Professional development of the early childhood education teaching workforce in the United States: An overview. Professional Development in Education, 41, 169-186.

Greenwood, C. R., Abbott, M., Beecher, C., Atwater, J., & Petersen, S. (2017). Development, validation, and evaluation of literacy 3D: A package supporting tier 1 preschool literacy instruction implementation and intervention. Topics in Early Childhood Special Education, 37, 29-41.

Guskey, T. R. (2003). What makes professional development effective? Phi Delta Kappan, 84, 748-750.

Halle, T., Metz, A., & Martinez-Beck, I. (Eds.), (2013). Applying implementation science in early childhood programs and systems. Baltimore, MD: Brookes.


Harms, T. & Clifford, R. M. (1989). Family day care rating scale. New York, NY: Teachers College.

Harms, T., Clifford, R. M., & Cryer, D. (2014). Early childhood environment rating scale. New York, NY: Teachers College.

Harris, B. M. (1980). In-service education for staff development. Needham, MA: Allyn & Bacon.

Harris, P. A., Taylor, R., Thielke, R., Payne, J., Gonzalez, N., & Conde, J. G. (2009). Research electronic data capture (REDCap): A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42, 377-381.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81-112.

Hemmeter, M. L., Fox, L., & Snyder, P. (2013). Teaching pyramid observation tool (TPOT) for preschool classrooms manual. Baltimore, MD: Brookes.

Hemmeter, M. L., Hardy, J. K., Schnitz, A. G., Adams, J. M., & Kinder, K. A. (2015). Effects of training and coaching with performance feedback on teachers’ use of Pyramid Model practices. Topics in Early Childhood Special Education, 35, 144- 156.

Hemmeter, M. L., Snyder, P. A., Fox, L., & Algina, J. (2016). Evaluating the implementation of the Pyramid Model for promoting social-emotional competence in early childhood classrooms. Topics in Early Childhood Special Education, 36, 133-146.

Hemmeter, M. L., Snyder, P., Kinder, K., & Artman, K. (2011). Impact of performance feedback delivered via electronic mail on preschool teachers’ use of descriptive praise. Early Childhood Research Quarterly, 26, 96-109.

Hsieh, W. Y., Hemmeter, M. L., McCollum, J. A., & Ostrosky, M. M. (2009). Using coaching to increase preschool teachers’ use of emergent literacy teaching strategies. Early Childhood Research Quarterly, 24, 229-247.

Hulleman, C. S., Rimm-Kaufman, S. E., & Abry, T. D. S. (2013). Whole-part-whole: Construct validity, measurement, and analytical issues for fidelity assessment in education research. In T. Halle, A. Metz, & I Martinez-Beck, I. (Eds.), Applying implementation science in early childhood programs and systems (pp. 65-93). Baltimore, MD: Brookes.

IBM Statistical Package for Social Sciences (SPSS) Statistics for Windows (Version 24.0) [computer software]. Armonk, NY: IBM.


Isner, T., Tout, K., Zaslow, M., Soli, M., Quinn, K., Rothenberg, L., & Burkhauser, M. (2011, February). Coaching in early care and education programs and quality rating and improvement systems (QRIS): Identifying promising features. Washington, DC: Child Trends. Retrieved from: https://www.childtrends.org/wp-content/uploads/2013/05/2011-35CoachingQualityImprovement.pdf

Jayaraman, G., Marvin, C., Knoche, L., & Bainter, S. (2015). Coaching conversations in early childhood programs: The contributions of coach and coachee. Infants & Young Children, 28, 323-336.

Joyce, B. R., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Killion, J. (2009). Coaches’ roles, responsibilities, and reach. In J. Knight (Ed.), Coaching approaches and perspectives (pp. 7-28). Thousand Oaks, CA: Corwin.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254-284.

Knight, J. (2007). Instructional coaching: A partnership approach to improving instruction. Thousand Oaks, CA: Corwin.

Knoche, L., & Bainter, S. (2012). Early childhood coaching conversation codes. [Coding manual]. Unpublished instrument. Center for Research on Children, Youth, Families and Schools, University of Nebraska, Lincoln, Lincoln, NE.

Kretlow, A. G., & Bartholomew, C. C. (2010). Using coaching to improve the fidelity of evidence-based practices: A review of studies. Teacher Education and Special Education: The Journal of the Teacher Education Division of the Council for Exceptional Children, 33, 279-299.

Landry, S. H., Anthony, J. L., Swank, P. R., & Monseque-Bailey, P. (2009). Effectiveness of comprehensive professional development for teachers of at-risk preschoolers. Journal of Educational Psychology, 101, 448-465.

Landry, S. H., Swank, P. R., Anthony, J. L., & Assel, M. A. (2011). An experimental study evaluating professional development activities within a state funded pre- kindergarten program. Reading and Writing, 24, 971-1010.

LeMoine, S. (2015) Governance and professional development systems. In S.L Kagan & R.E. Gomez, (Eds.), Early childhood governance: Choices and consequences. New York, NY: Teachers College.

Lofthouse, R., Leat, D., Towler, C., Hallet, E., & Cummings, C. (2010). Improving coaching: Evolution not revolution. Retrieved from: http://www.ncl.ac.uk/cflat/news/documents/CoachingSkillsTWFinalwebPDFv3.pdf


Marturana, E. R., & Woods, J. J. (2012). Technology-supported performance-based feedback for early intervention home visiting. Topics in Early Childhood Special Education, 32, 14-23.

Martinez-Beck, I. (2013) Introduction: Where is the new frontier of implementation science in early care and education research and practice? In T. Halle, A. Metz, & I. Martinez-Beck (Eds.), Applying implementation science in early childhood programs and systems (pp. xix-xxx). Baltimore, MD: Brookes.

McLean, M., Sandall, S. R., & Smith, B. J. (2016). A history of early childhood special education. In B. Reichow, B. A. Boyd, E. Barton, & S. L. Odum (Eds.), Handbook of early childhood special education (pp. 3-19). Basel, Switzerland: Springer International.

McCollum, J. A., & Catlett, C. (1997). Designing effective personnel preparation for early intervention: Theoretical frameworks. In P.J. Winton, J.A. McCollum, & C. Catlett (Eds.), Reforming personnel preparation in early intervention: Issues, models, and practical strategies (105-126). Baltimore, MD: Brookes.

McCollum, J. A., Hemmeter, M. L., & Hsieh, W. Y. (2011). Coaching teachers for emergent literacy instruction using performance-based feedback. Topics in Early Childhood Special Education, 33, 28-37.

Metz, A., Halle, T., Bartley, L., & Blasberg, A. (2013). The key components of successful implementation. Applying implementation science in early childhood programs and systems. In T. Halle, A. Metz, & I. Martinez-Beck (Eds.), Applying implementation science in early childhood programs and systems (pp. 21-42). Baltimore, MD: Brookes.

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ, 339, b2535. http://doi.org/10.1136/bmj.b2535

Moyers, T. B., Martin, T., Christopher, P. J., Houck, J. M., Tonigan, J. S., & Amrhein, P. C. (2007). Client language as a mediator of motivational interviewing efficacy: Where is the evidence? Alcoholism: Clinical and Experimental Research, 31, 40- 47.

National Association for the Education of Young Children (2015). Early childhood educators: Advancing the profession [PowerPoint slides]. Retrieved from https://www.naeyc.org/files/naeyc/Key%20Findings%20Presentation.NAEYC_.pdf

National Association for the Education of Young Children (2009). Developmentally appropriate practice in early childhood programs serving children birth through age 8. [Position Statement]. Retrieved from https://www.naeyc.org/files/naeyc/file/positions/PSDAP.pdf

National Professional Development Center on Inclusion. (2008). What do we mean by professional development in the early childhood field? Chapel Hill, NC: The University of North Carolina, FPG Child Development Institute.

National Research Council. (2000). How people learn: Brain, mind, experience, and school (Expanded ed.). Washington, DC: National Academies Press.

Neuman, S. B., & Cunningham, L. (2009). The impact of professional development and coaching on early language and literacy instructional practices. American Educational Research Journal, 46, 532-566.

Neuman, S. B., Dwyer, J., & Koh, S. (2007). The child/home early language & literacy observation (CHELLO) tool. Baltimore, MD: Brookes.

Neuman, S. B., & Wright, T. S. (2010). Promoting language and literacy development for early childhood educators: A mixed-methods study of coursework and coaching. The Elementary School Journal, 111, 63-86.

Noldus Information Technology. The Observer XT (Version 12.0) [Computer software]. Wageningen, The Netherlands.

Oborn, K. M. K., & Johnson, L. D. (2015). Coaching via electronic performance feedback to support home visitors’ use of caregiver coaching strategies. Topics in Early Childhood Special Education, 35, 157-169.

Passarelli, A. M. (2015). Vision-based coaching: Optimizing resources for leader development. Frontiers in Psychology, 6, http://doi.org/10.3389/fpsyg.2015.00412

Pellegrini, A. D., Symons, F., & Hoch, J. (2014). Observing children in their natural worlds: A methodological primer. Florence, KY: Psychology Press.

Peterson, S. M. (2013). Readiness to change: Effective implementation processes for meeting people where they are. In T. Halle, A. Metz, & I. Martinez-Beck (Eds.), Applying implementation science in early childhood programs and systems (pp. 43-64). Baltimore, MD: Brookes.

Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom assessment scoring system: Manual pre-k. Charlottesville, VA: Teachstone.

Pianta, R. C., Mashburn, A. J., Downer, J. T., Hamre, B. K., & Justice, L. (2008). Effects of web-mediated professional development resources on teacher–child interactions in pre-kindergarten classrooms. Early Childhood Research Quarterly, 23, 431-451.

Pianta, R. C., DeCoster, J., Cabell, S., Burchinal, M., Hamre, B. K., Downer, J., ... & Howes, C. (2014). Dose–response relations between preschool teachers’ exposure to components of professional development and increases in quality of their interactions with children. Early Childhood Research Quarterly, 29, 499-508.

Powell, D. R., & Diamond, K. E. (2013). Implementation fidelity of a coaching-based professional development program for improving Head Start teachers’ literacy and language instruction. Journal of Early Intervention, 35, 102-128.

Powell, D. R., & Diamond, K. E. (2013b). Studying the implementation of coach-based professional development. In T. Halle, A. Metz, & I. Martinez-Beck (Eds.), Applying implementation science in early childhood programs and systems (pp. 97-116). Baltimore, MD: Brookes.

Powell, D. R., Diamond, K. E., Burchinal, M. R., & Koehler, M. J. (2010). Effects of an early literacy professional development intervention on Head Start teachers and children. Journal of Educational Psychology, 102, 299-312.

Powell, D. R., Steed, E. A., & Diamond, K. E. (2010). Dimensions of literacy coaching with Head Start teachers. Topics in Early Childhood Special Education, 30, 148-161.

Rush, D. D., & Shelden, M. L. L. (2011). The early childhood coaching handbook. Baltimore, MD: Brookes.

Salisbury, C., Cambray-Engstrom, E., & Woods, J. (2012). Providers’ reported and actual use of coaching strategies in natural environments. Topics in Early Childhood Special Education, 32, 88-98.

Salisbury, C., Cambray-Engstrom, E., Woods, J., & Friedman, M. (2008). Routine and instructional strategy coding protocol [Manual and protocol]. Unpublished instrument. Child & Family Development Center, University of Illinois at Chicago, Chicago, IL.

Salisbury, C., Woods, J., Snyder, P., Moddelmog, K., Mawdsley, H., Romano, M., & Windsor, K. (2017). Caregiver and provider experiences with coaching and embedded intervention. Topics in Early Childhood Special Education, http://doi.org/10.1177/0271121417708036

Sameroff, A. (2009). The transactional model. In A. Sameroff (Ed.), The transactional model of development: How children and contexts shape each other (pp. 3-21). Washington, DC: American Psychological Association.

Schachter, R. E. (2015). An analytic study of the professional development research in early childhood education. Early Education and Development, 26, 1057-1085.

Scheeler, M. C., Ruhl, K. L., & McAfee, J. K. (2004). Providing performance feedback to teachers: A review. Teacher Education and Special Education, 27, 396-407.

Shannon, D., & Snyder, P. (2016). Coaching practices observation tool: Research Version I (CPOT-RVI) [Manual and training videos]. Unpublished instrument. Anita Zucker Center for Excellence in Early Childhood Studies, University of Florida, Gainesville, FL.

Shannon, D., Snyder, P., & McLaughlin, T. (2015). Preschool teachers’ insights about web-based self-coaching versus on-site expert coaching. Professional Development in Education, 41, 290-309.

Sheridan, S. M., Edwards, C. P., Marvin, C. A., & Knoche, L. L. (2009). Professional development in early childhood programs: Process issues and research needs. Early Education and Development, 20, 377-401.

Shonkoff, J. P. (2010). Building a new biodevelopmental framework to guide the future of early childhood policy. Child Development, 81, 357-367.

Smith, M. W., & Dickinson, D. K. (2002). Early language & literacy classroom observation (ELLCO) toolkit, Research Edition. Baltimore, MD: Brookes.

Smith, R. E., Smoll, F. L., & Hunt, E. (1977). A system for the behavioral assessment of athletic coaches. Research Quarterly. American Alliance for Health, Physical Education and Recreation, 48, 401-407.

Snyder, P. A., Algina, J., Hemmeter, M. L., & McLean, M. E. (2015). Impact of professional development on preschool teachers’ use of embedded instruction practices: An efficacy trial of Tools for Teachers. Funded Research Grant: Institute of Education Sciences [R324A150076].

Snyder, P. A., Hemmeter, M. L., Bishop, C., Shannon, D., & McLean, M. (2015). Embedded instruction for early learning coach manual [Manual and protocols]. Unpublished manual. Anita Zucker Center for Excellence in Early Childhood Studies, University of Florida, Gainesville, FL.

Snyder, P. A., Hemmeter, M. L., & Fox, L. (2015). Supporting implementation of evidence-based practices through practice-based coaching. Topics in Early Childhood Special Education, 35, 133-143.

Snyder, P., Hemmeter, M. L., & McLaughlin, T. (2011). Professional development in early childhood intervention: Where we stand on the silver anniversary of PL 99-457. Journal of Early Intervention, 33, 357-370.

Snyder, P., Hemmeter, M. L., McLean, M. E., Sandall, S., & McLaughlin, T. (2013). Embedded instruction to support early learning in response-to-intervention frameworks. In V. Buysse & E. Peisner-Feinberg (Eds.), Handbook of response to intervention in early childhood (pp. 283–298). Baltimore, MD: Brookes.

Snyder, P., Hemmeter, M. L., McLean, M. E., Sandall, S., McLaughlin, T., & Algina, J. (2017). Impact of professional development on preschool teachers’ use of embedded instruction practices. Manuscript submitted for publication.

Snyder, P., Hemmeter, M. L., Meeker, K. A., Kinder, K., Pasia, C., & McLaughlin, T. (2012). Characterizing key features of the early childhood professional development literature. Infants & Young Children, 25, 188-212.

Snyder, P., Hemmeter, M. L., Sandall, S., McLean, M., Emery, A. K., Rakap, S., McLaughlin, T., & Embedded Instruction for Early Learning Project. (2009). Coaching preschool teachers to use embedded instruction practices [Manual and coaching protocols]. Unpublished manual. College of Education, University of Florida, Gainesville, FL.

Snyder, P., & Wolfe, B. (2008). The big three process components of effective professional development: Needs assessment, evaluation, and follow-up. In P.J. Winton, J.A. McCollum, & C. Catlett (Eds.), Practical approaches to early childhood professional development: Evidence, strategies, and resources (pp. 13-51). Washington, DC: Zero to Three.

Sutherland, K. S., Conroy, M. A., Vo, A., & Ladwig, C. (2015). Implementation integrity of practice-based coaching: Preliminary results from the BEST in CLASS efficacy trial. School Mental Health, 7(1), 21-33.

Teaching Pyramid Research Project (2013). Teaching pyramid intervention: Coaching manual [Manual and coaching protocols]. Unpublished manual. College of Education, University of Florida, Gainesville, FL.

Thurlings, M., Vermeulen, M., Bastiaens, T., & Stijnen, S. (2013). Understanding feedback: A learning theory perspective. Educational Research Review, 9, 1-15.

Trivette, C. M., Dunst, C. J., Hamby, D. W., & O’Herin, C. E. (2009). Characteristics and consequences of adult learning methods and strategies. Winterberry Research Syntheses, 2(2), 1-33.

Wasik, B. A., & Hindman, A. H. (2011). Improving vocabulary and pre-literacy skills of at-risk preschoolers through teacher professional development. Journal of Educational Psychology, 103, 455-469.

Winton, P. J., Snyder, P., & Goffin, S. (2015). Beyond the status quo: Rethinking professional development for early childhood teachers. In L. Couse & S. Recchia (Eds.), Handbook of early childhood teacher education (pp. 54-68). New York, NY: Routledge.

Wolery, M. (2011). Intervention research: The importance of fidelity measurement. Topics in Early Childhood Special Education, 31, 155-157.

Woods, J. (2005). A family-guided, routines-based model for caregiver consultation (FGRBI). Tallahassee, FL: Florida State University, Department of Communication Disorders.

Yoder, P., & Symons, F. (2010). Observational measurement of behavior. New York, NY: Springer.

Yoshikawa, H., Weiland, C., Brooks-Gunn, J., Burchinal, M. R., Espinosa, L. M., Gormley, W. T., ... & Zaslow, M. J. (2013, October). Investing in our future: The evidence base on preschool education (Vol. 9). Society for Research in Child Development and Foundation for Child Development. Retrieved from https://www.fcd-us.org/the-evidence-base-on-preschool/

Zan, B., & Donegan-Ritter, M. (2014). Reflecting, coaching and mentoring to enhance teacher–child interactions in Head Start classrooms. Early Childhood Education Journal, 42, 93-104.

Zaslow, M. J. (2009). Strengthening the conceptualization of early childhood professional development initiatives and evaluations. Early Education and Development, 20, 527-536.

Zaslow, M., Tout, K., Halle, T., Whittaker, J. V., & Lavelle, B. (2010). Toward the identification of features of effective professional development for early childhood educators [Literature Review]. Office of Planning, Evaluation and Policy Development, US Department of Education. Retrieved from: https://www2.ed.gov/rschstat/eval/professional-development/literature-review.pdf

BIOGRAPHICAL SKETCH

Darbianne Shannon earned her Doctor of Philosophy from the University of Florida in 2017. Her major area of study was curriculum and instruction, with a specialization in early childhood studies. Darbianne’s research interests include (a) preservice and inservice professional development and coaching that support teachers’ acquisition and sustained use of evidence-based practices and (b) system-level policies and mechanisms designed to build capacity and enhance continuous improvement.

Prior to enrolling in the doctoral program at the University of Florida, Darbianne had 10 years of classroom experience working with children from age three through third grade in the state of Florida. She earned her master’s degree in reading, with a specialization in early language and literacy acquisition and reading disabilities, from the University of South Florida in 2007 and her bachelor of science in pre-kindergarten, primary, and elementary education from Florida Southern College in 2004.

Following the completion of her doctoral degree, Darbianne joined the Anita Zucker Center for Excellence in Early Childhood Studies at the University of Florida as a postdoctoral associate. The Center is an interdisciplinary center dedicated to advancing knowledge, policy, and practices in early childhood studies.
