
Comparing Participatory and Direct Instructional Types of Interdisciplinary

Health Sciences and Professions Students’ Perceived Achievement

in a Group Module Project

A dissertation presented to

the faculty of

The Patton College of Education of Ohio University

In partial fulfillment

of the requirements for the degree

Doctor of Philosophy

John F. K. Ekpe

August 2016

© 2016 John F. K. Ekpe. All Rights Reserved.

This dissertation titled

Comparing Participatory and Direct Instructional Types of Interdisciplinary

Health Sciences and Professions Students’ Perceived Achievement

in a Group Module Project

by

JOHN F. K. EKPE

has been approved for

the Department of Educational Studies

and The Patton College of Education by

David R. Moore

Professor of Educational Studies

Renée A. Middleton

Dean, The Patton College of Education

Abstract

EKPE, JOHN F. K., Ph.D., August 2016, Curriculum and Instruction,

Instructional Technology

Comparing Participatory and Direct Instructional Types of Interdisciplinary Health

Sciences and Professions Students’ Perceived Achievement in a Group Module Project

(572 pp.)

Director of Dissertation: David R. Moore

The purpose of this quasi-experimental study was to investigate the use of participatory instruction as a means of teaching Institute of Medicine (IOM) standards in a group module project, compared with teaching the same standards using direct instruction. The students’ final perceived achievement score, instructional type (two levels), team preference (two levels), major (six levels), inter-professional (IP) team (nine levels), and standards (five levels) were considered. Students’ final perceived achievement, students’ initial perceived achievement (students’ perceived self-efficacy), self-concept gains, and differences by team preference, major, and inter-professional teams were also analyzed.

Students in this study were drawn from three intact classes (three cohorts) of the 2013/2014 academic year (the experimental group) and three intact classes (three cohorts) of the 2014/2015 academic year (the comparative group). The sample consisted of 90 students. The experimental group used participatory instruction for 14 weeks of the semester. This is a value-driven, practice-based type of instruction, which allowed the students to interact in their teams. The comparative group, not taught using participatory instruction, was taught the same standards using a direct instruction method (primarily lecture).

A quasi-experimental non-equivalent control group design was used. The dependent variable was the final perceived achievement score. The independent variables were the initial perceived achievement score (covariate), instructional method, team preference, major, and IP team. Pre-survey and post-survey scores on the IOM Self-Reported Knowledge Achievement scales were analyzed.

Analysis of the data revealed that the participatory group gained significantly, improving both their final perceived achievement and their self-concept scores. The participatory group working in teams, when compared to their direct instruction counterparts, showed a significantly greater change in perceived achievement scores.

It is recommended that further studies be conducted to investigate students’ self-concepts, perceived self-efficacy, and perceived achievement levels under participatory instruction at all levels of inter-professional education; implications of the findings are also provided.

Dedication

This dissertation is dedicated to David and John. Thank you for everything.


Acknowledgments

Professor David Richard Moore, thank you for being my advisor, my dissertation chair, a constant source of encouragement, and a friend; without you I could not have enrolled in and completed this journey. I would also like to thank my committee members, Professor Emerita Teresa Franklin and Dr. Krisanna Machtmes, for the advice, time, and thought that you have put into this study. The same thanks go to Dr.

John McCarthy who has been very encouraging, allowing me to use the HSP 4510/5510 project database, and keeping me on the MedTAPP Healthcare Access Initiative grant from the beginning to the completion of my journey. Each of you has inspired me throughout the years I have known you. You have influenced my life in various ways.

You are wonderful committee members, and I am very appreciative of all the time, support, and advice you have given me from my Program of Study to the end of my dissertation journey.

I would also like to thank the other professors who taught and mentored me in what I know about research at various stages of this dissertation journey. They include Drs. Gordon P. Brooks, Adah Ward-Randolph, Seann Dikkers, Ong Kim Lee,

Ronaldo Vigo, Mark Alicke, Bruce Carlson, and Dwan V. Robinson. Other names are

Chris Hitchcock, Gabriela Castaneda-Jimenez, Lara Walace, Reuben Asempapa, Samuel

Antwi, and Rashmi Sharma. Also, I could never have completed this journey without the support from my course-mates; my twin brothers (Reuben and Simeon); and Emily and Diana.

Table of Contents

Page

Abstract ...... 3 Dedication ...... 5 Acknowledgments...... 6 List of Tables ...... 17 List of Figures ...... 21 Chapter 1: Introduction ...... 22 Background ...... 22 Statement of the Problem ...... 30 Purpose of the Study ...... 31 Significance of the Study ...... 31 Research Questions ...... 32 Hypotheses ...... 34 Hypothesis 1...... 34 Hypothesis 2...... 34 Hypothesis 2a ...... 35 Hypothesis 2b...... 35 Hypothesis 2c ...... 36 Hypothesis 2d...... 36 Hypothesis 3...... 37 Hypothesis 4...... 37 Hypothesis 5...... 38 Hypothesis 6...... 38 Hypothesis 7...... 39 Hypothesis 8...... 39 Hypothesis 9...... 40 Limitations of the Study...... 40 Delimitations of the Study ...... 42 Definition of Terms...... 43 8

Organization of the Study ...... 48 Chapter 2: Review of Literature...... 50 Theoretical Perspective ...... 50 Ontological Literature ...... 50 Epistemological Literature ...... 51 Axiological Literature ...... 52 A Theory of Practice ...... 52 Parameters for the Metatheory of Instruction ...... 54 Conditions ...... 54 Characteristics of Instructional Theories and Models ...... 55 Descriptive and prescriptive research ...... 57 Learning Theory...... 59 Instructional Concepts...... 59 Origins of an Instructional Theory and its Later Influence...... 59 Method ...... 63 Materials television tapes...... 63 Size of demonstration unit...... 63 Provisions for review and practice...... 63 Mode of practice...... 64 Demonstration content ...... 64 Design of the Experiment ...... 64 Procedure ...... 64 Sample...... 65 Learning Principles ...... 67 Curriculum ...... 68 Constructivist Instructional Principles ...... 71 Cognitive Entry Behaviors as Causal Variables ...... 71 Learning Styles ...... 73 Learning styles definition...... 73 Issues and controversies of learning styles ...... 73 Learning styles implications for education ...... 75 9

Assumptions of learning style...... 75 Learning styles pros and cons ...... 76 Self-Efficacy ...... 77 Quality of Instruction ...... 80 Quality of instruction as a causal variable ...... 82 Achievement ...... 83 Nature of the meaning of achievement ...... 83 Age ...... 85 History of a Student ...... 85 Alterability of Learning ...... 86 Direct Instruction Approach...... 87 Student-Driven Participatory Approach...... 88 The goal of student...... 90 The role of the instructor...... 90 Student-centered instructional environment ...... 92 Problem- or case-based learning ...... 92 Project-based learning...... 93 Impact of the Different Approaches for Inter-Professional Healthcare Classroom...... 96 Learning: Not a Linear Process...... 97 A Process of Making Personal Meaning...... 98 Choosing the Participatory Approach ...... 98 Direct Instruction versus Participatory Instruction ...... 99 Direct instruction...... 99 Participatory intervention model...... 100 Competency-Based Learning ...... 103 Competence...... 104 Professional competence...... 104 Competency ...... 104 Competency-based education ...... 104 Core competency...... 105 Competency areas ...... 105 10

Reasons for competency-based learning...... 105 Reasons for competency-learning being important ...... 106 Reasons for moving to competency-based learning...... 107 Critiques for competency-based learning ...... 108 Solutions to the critiques for competency-based learning ...... 109 Collaborative Learning ...... 110 Strengths for collaborative learning ...... 111 Cooperative Learning...... 113 Cooperative learning methods ...... 117 Forms of cooperative learning ...... 119 Learning together ...... 119 Discussion groups ...... 119 Group projects...... 120 Informal cooperative methods ...... 120 Elements in a cooperative classroom management...... 121 Strengths for cooperative learning ...... 122 Barriers for cooperative learning ...... 123 Implication for instructors...... 123 Classroom Assessment...... 124 Peer assessment...... 125 Strengths of using peer assessment ...... 127 Challenges of peer assessment ...... 129 The role for instructor ...... 129 The role of a student...... 129 Group Facilitation Theories and Models for Practice ...... 130 Tutelary Authority as Initiation...... 130 Open learning ...... 131 ...... 131 Real learning ...... 131 Peer learning ...... 132 Multi-stranded curriculum ...... 132 11

Contract learning...... 132 Resource consultancy...... 133 Guardianship ...... 133 Political Authority as Initiation...... 134 The Three Decision-Modes and Five Elements of Learning Process ...... 135 Direction...... 135 Negotiation...... 135 Delegation ...... 136 Charismatic Authority as Initiation...... 136 The role of the facilitator ...... 137 Aspects of task ...... 137 Aspects of process...... 137 Skills...... 138 Personal change...... 139 Participatory Evaluation...... 139 Participation ...... 140 Participatory evaluation ...... 141 Forms of participatory evaluation ...... 142 Issues for consideration in participatory evaluation ...... 143 Principles of participatory evaluation ...... 143 Key elements of the participatory evaluation process ...... 144 Participatory Action Research ...... 145 Participatory Culture ...... 146 Participatory Observation and Sense-Making ...... 146 Characteristics of participatory evaluation ...... 147 The scope and depth of the participatory evaluation ...... 148 Appreciative Inquiry (AI) as a Method for Participatory Change ...... 148 Program effectiveness ...... 149 Methodological Literature...... 150 Rhetorical Literature ...... 150 Summary ...... 151 12

Achievement, instruction, and learning ...... 152 Self-efficacy ...... 152 Grouping/team and achievement...... 153 Self-concept, self-efficacy, learning styles, and achievement ...... 155 Implication for learning styles ...... 155 Implications for IP team...... 156 Implication for curriculum ...... 157 Chapter 3: Methodology ...... 159 Introduction ...... 159 Research Design...... 159 A nonequivalent control group design ...... 159 Population ...... 161 Sampling Plan ...... 164 Sample Size Selection ...... 166 Sample size ...... 167 The grant ...... 169 Instrument ...... 169 Data Collection ...... 170 Weekly students’ journals ...... 170 Data Collection Procedure ...... 172 Data Analysis Procedure Phase 1- Quantitative Data ...... 173 Assumptions...... 173 Statistical assumptions ...... 174 Outliers in the analysis ...... 181 Comparison of the mean for the two groups...... 184 Data Analysis Procedure: Phase 2-Qualitative Data...... 187 Qualitative journal reflection ...... 187 Respondent validation...... 188 Triangulation ...... 189 Aspects of triangulation ...... 189 Pilot Study...... 189 13

IRB Procedures...... 189 Subject-matter expert ...... 190 Participatory classroom environment...... 190 The content...... 191 Learning objectives of HSP 5510 course ...... 191 Sample module on the IOM core competencies (IOM Standards) ...... 193 Sample module iBook project...... 195 Component 1 ...... 195 Component 2 ...... 195 Component 3 ...... 196 Component 4 ...... 196 Component 5 ...... 196 Class activities...... 196 Participatory learning activities ...... 196 Instructional objective...... 196 Mode of practice ...... 196 Patience-centered care...... 198 Interdisciplinary team ...... 199 Evidence-based practice...... 199 Quality improvement ...... 199 Informatics ...... 199 Data retrieved ...... 199 Scoring level of perceived achievement ...... 199 Phase 1 pilot study- quantitative data analysis ...... 200 Criteria for in phase 1 ...... 200 Phase 2 pilot study- qualitative data analyses ...... 201 Case selection...... 201 Pilot study results ...... 201 Chapter 4: Results ...... 203 Research Questions and Hypotheses ...... 203 Research question 1 ...... 203 14

Hypothesis 1...... 203 Research question 2 ...... 203 Hypothesis 2a ...... 204 Hypothesis 2b...... 204 Hypothesis 2c ...... 205 Hypothesis 2d...... 205 Research question 3 ...... 206 Hypothesis 3...... 206 Research question 4 ...... 206 Hypothesis 4...... 206 Research question 5 ...... 207 Hypothesis 5...... 207 Research question 6 ...... 208 Hypothesis 6...... 208 Research question 7 ...... 208 Hypothesis 7...... 208 Research question 8 ...... 209 Hypothesis 8...... 209 Research question 9 ...... 209 Hypothesis 9...... 210 Research question 10 ...... 210 Preliminary Data Analyses...... 210 Reporting the results of data screening ...... 210 Testing for assumptions of ANOVA and ANCOVA ...... 211 Research Questions and Findings ...... 223 Qualitative Data Analysis ...... 290 Students’ Comments on Survey by Standards ...... 296 Patient-centered care knowledge ...... 296 Interdisciplinary teamwork ...... 297 Evidence-based practice...... 298 Quality improvement ...... 298 15

Informatics ...... 299 Challenges/Benefits for HSP Students Learning/Working Together ...... 300 Problems of working in teams ...... 300 Benefits of working in teams ...... 300 Problems of working alone ...... 301 Benefits of working alone ...... 302 Post Perceptions of other Disciplines ...... 303 Nursing...... 303 Main Findings ...... 307 Summary ...... 313 Chapter 5: Discussions and Conclusion...... 320 Scope ...... 320 Discussion ...... 322 Discussion of findings...... 322 Research question 1 ...... 323 Research question 2 ...... 326 Research question 3 ...... 341 Research question 4 ...... 343 Research question 5 ...... 345 Research question 6 ...... 347 Research question 7 ...... 350 Research question 8 ...... 352 Research question 9 ...... 354 Limitation...... 357 Implications...... 360 Implications for students being paid ...... 364 Recommendations ...... 364 Suggestions for Future Studies ...... 367 Conclusion ...... 370 Chapter Summary ...... 371 References ...... 373 16

Appendix A. Practice Participatory Instruction ...... 394 Appendix B. Syllabus for Participatory Learning Instruction ...... 397 Appendix C. Syllabus for Direct Instruction: ...... 406 Appendix D. IOM Self-Reported Knowledge Achievement (IOMSKA) Survey ...... 414 Appendix E. Permission to use MedTAPP Database ...... 418 Appendix F. IRB Approval Letter ...... 419 Appendix G. Pilot Study ...... 420 Appendix H. Literature Tables...... 460 Appendix I. First Data Cleaning-N93 ...... 479 Appendix J. Second Data Cleaning-N90 ...... 513 Appendix K. Analysis of Students’ Journal Reflections ...... 533 Appendix L. Qualitative Data ...... 550 Appendix M. Testing for Assumptions...... 568 Appendix N. Original and Revised Topic and Research Questions ...... 569 Appendix O. Funding Sources for the HSP Program ...... 572


List of Tables

Page

Table 1 Number of Students in the Population and Sample for Pilot and Main Studies 163

Table 2 Means, Standard Deviations, Cronbach’s Alpha, and Correlations: Pre-Survey (N = 90)...... 214

Table 3 Means, Standard Deviations, Cronbach’s Alpha, and Correlations: Post-Survey (N = 90)...... 215

Table 4 Demographic Data, by Cohort ...... 216

Table 5 Frequency Distribution of Demographic Data, by Status ...... 217

Table 6 Frequency Distribution of Demographic Data, by Gender ...... 218

Table 7 Frequency Distribution of Demographic Data, by Age...... 219

Table 8 Frequency Distribution of Demographic Data, by Major ...... 220

Table 9 Frequency Distribution of Demographic Data, by Team Preference ...... 221

Table 10 Frequency Distribution of Demographic Data, by Interdisciplinary (IP) Team ...... 222

Table 11 Change Scores, Independent Samples T Test Results, and Effect Sizes for Students’ Self-Concept and Perceived Achievement Due to Instructional Methods ...... 224

Table 12 Results of an Independent Samples t-Test for Change Scores of Students’ Self- concept and Achievement Due to Instructional Methods and Working in Teams ...... 226

Table 13 Results of an Independent Samples t-test for Change Scores of Students’ Self- concept and Achievement Due to Instructional Methods and Team Preference (Working Alone, n = 27) ...... 229

Table 14 Comparison of Students’ Overall Post-Perceived Achievement Scores, by Team Preference (Working in Teams and Working Alone) and Participatory Instructional Type ...... 231

Table 15 Comparison of Students’ Overall Perceived Achievement Change Scores, by Team Preference (Working in Teams and Working Alone) and Direct Instructional Type ...... 234

Table 16 Initial Perceived Achievement Means and Final Perceived Achievement Means by Participatory Group (n = 40) and Major ...... 237

Table 17 Effect of Participatory Instruction on Final Perceived Achievement Scores (ANOVA), by Major ...... 238

Table 18 Interaction between Covariate and Major Using GLM -ANCOVA Type III SS, by Participatory Group ...... 239

Table 19 Effect of Participatory Instruction on Final Perceived Achievement Scores, Controlling for Initial Perceived Achievement Scores, by Major...... 240

Table 20 Parameter Estimates of Final Perceived Achievement Scores, by Participatory Group and Major ...... 241

Table 21 Initial Perceived Achievement Means and Final Perceived Achievement Means, by Direct Group and Major ...... 244

Table 22 Effect of Direct Instruction on Final Perceived Achievement Scores (ANOVA), by Major ...... 245

Table 23 Interaction between Covariate and Major Using GLM -ANCOVA Type III SS, by Direct Group ...... 246

Table 24 Effect of Direct Instruction on Final Perceived Achievement Scores, Controlling for Initial Perceived Achievement Scores, by Major ...... 247

Table 25 Parameter Estimates of Final Perceived Achievement Score, by Direct Group and Major...... 248

Table 26 Summary of Correlation Coefficients, Means, and Standard Deviations for Scores on fpAch, Major, Instructional Types, and ipAch ...... 251

Table 27 Summary of Partial Correlation Coefficients, Means, and Standard Deviations for Scores on fpAch, Major, and Instructional Types ...... 252

Table 28 Summary of Hierarchical Regression Analysis for Variables ipAch, fpAch, and Major as a Function of Instruction ...... 255

Table 29 Summary of Correlation Coefficients, Means, and Standard Deviations for Scores on ipAch and fpAch as a function of Instruction, Team Preference (N = 90) .... 258

Table 30 Summary of Partial Correlation Coefficients, Means, and Standard Deviations for Scores on ipAch and fpAch as a function of Instruction, Team Preference (N = 90) ...... 259

Table 31 Summary of Hierarchical Regression Analysis for Variables Predicting Final Perceived Achievement, by Team Preference (N = 90) ...... 261

Table 32 Initial Perceived Achievement Means and Final Perceived Achievement Means, by Participatory Group and IP Team ...... 263

Table 33 Effect of Participatory Instruction on Final Perceived Achievement Mean, by Inter-Profession Teams (ANOVA) ...... 265

Table 34 Interaction between Covariate and Inter-professional Team Using GLM - ANCOVA Type III SS, by Participatory Group ...... 266

Table 35 Effect of Participatory Instruction on Final Perceived Achievement Scores, Controlling for Initial Perceived Achievement Scores, by IP Team ...... 267

Table 36 Parameter Estimates of Final Perceived Achievement Score, by Participatory Group and IP Team ...... 268

Table 37 Initial, Final, and Adjusted Perceived Achievement Means, and Standard Deviations of Students’ IP Team, by Direct Instruction (n = 50)...... 272

Table 38 Effect of Direct Instruction on Students’ Initial and Final Perceived IP Team Achievement Scores (ANOVA) ...... 273

Table 39 Interaction between Covariate and IP Team Using GLM ANCOVA Type III SS, by Direct Instruction ...... 274

Table 40 Effect of Direct Instruction on Students’ Final Perceived IP Team Achievement Scores, Controlling for their initial Perceived IP Team Achievement Scores (ANCOVA) ...... 275

Table 41 Parameter Estimates of Final Perceived IP Team Achievement Variable for Direct Instruction ...... 277

Table 42 Summary of Correlation Coefficients, Means, and Standard Deviations for Scores on ipAch and fpAch as a Function of Instruction, by Inter-professional Team .. 281

Table 43 Summary of Partial Correlation Coefficients, Means, and Standard Deviations for Scores on ipAch and fpAch as a Function of Instruction, by Inter-Profession Team 282

Table 44 Summary of Hierarchical Regression Analysis for Variables Predicting Final Perceived Achievement, by IP Team...... 285

Table 45 Summary of Hierarchical Regression Analysis for Variables Predicting Final Perceived Achievement, all of the Demographics Variables ...... 287

Table 46 Demographic Data for the Selected HSP Students, by Two Instructional Types ...... 292

Table 47 Codes, Coding, and Keywords, by Standards...... 295

Table 48 Summary Table of Independent Samples t-Test Results ...... 314

Table 49 Summary Table of ANOVA and ANCOVA Results ...... 315

Table 50 Summary of Zero-Order and Partial Correlations Results, by Major ...... 316

Table 51 Summary of Zero-Order and Partial Correlations Results, by Team Preference ...... 317

Table 52 Summary of Zero-Order and Partial Correlations Results, by IP Teams ...... 318

Table 53 Summary Table of Hierarchical Multiple Regression Results...... 319


List of Figures

Page

Figure 1. Inter-relationships among theory, research, and practice...... 57

Figure 2. Showing quasi-experimental design- a nonequivalent control group ...... 159

Figure 3. Showing sampling design for the phases 1 and 2 ...... 168

Figure 4. Showing the weeks the pre-survey and post-survey questions administered . 172

Figure 5. Example of students’ work showing various components of IOM standards of a group module project ...... 195


Chapter 1: Introduction

Background

For over a decade, the 2001 Institute of Medicine (IOM) report, Crossing the

Quality Chasm: A New Health System for the 21st Century, has called for reform in all disciplines of education for health professions (Greiner & Knebel, 2003). The reform emphasizes that all health professionals should be educated to master five core competencies, namely a) providing “patient-centered care”; b) working in

“interdisciplinary teams”; c) employing “evidence-based practice”; d) applying “quality improvement”; and e) utilizing “informatics” (Greiner & Knebel, 2003, p. 1). In recent years, McGillan, Tarini, and Small (2001) report, the health professions education

(education system, health system and health care system) has become one of the most important concerns for the stakeholders in America. “More than half (58%) of health providers and administrators”, and “about 95% of physicians reported that there have been medical errors (diagnostic, treatment, and preventive)” (McGillan et al., 2001, p. 1).

Other researchers report that the healthcare system was poorly coordinated, ineffective, and not patient-centered (Davis, Schoen, & Stremikis, 2010), as well as very expensive and lacking in transparency (Binder, 2013). Additionally, there is a mismatch between school practice culture and health working culture (Frenk et al., 2010).

To address this problem, stakeholders have raised concerns about how health professions education should be radically transformed to provide good quality health care for every American (Buchbinder & Thompson, 2010). In addition, President Obama signed two pieces of health reform legislation: the American Recovery and Reinvestment Act of 2009, which provided financial support for improving health information technology, and the Patient Protection and Affordable Care Act of 2010, which sought to realign providers’ financial incentives, encourage more efficient organization, deliver healthcare, and invest in preventive and population health (Davis et al., 2010). Further, several organizations, for example, the Joint Commission on Accreditation of Healthcare Organizations, the National

Organization for Associate Degree Nursing, and National Organization of Nurse

Practitioner Faculties are meeting to work on reform strategies and actions necessary to find solutions to the problem (Greiner & Knebel, 2003).

Furthermore, several researchers from various fields expressed their concerns about school practice culture, including health professions education (Greiner & Knebel,

2003); medicine (Barr, 2009; Epstein & Hundert, 2002; Hundert, Hafferty, & Christakis,

1996; Leach, 2000; Prideaux, 2009; Smith, 2009); education (Chiu, 2004; Hughes, 2011;

Lutze-Mann, 2014; Topping, 1998); engineering education (Felder & Brent, 2003, 2004,

2007); student-centered learning (Oakley, Felder, Brent, & Elhaji, 2004); teaching and learning (Asgari & Dall’Alba, 2011); learning and instruction (Strijbos, Narciss, &

Dūnnebier, 2010); medicine and biology (Cvetkovic, 2013); chemistry (Bodner, Metz, &

Casey, 2014; Cardellini, 2014); cooperative learning (Felder & Brent, 2000); inter- professional collaborative practice (IPEC, 2011); active learning (Felder & Brent, 2003,

2007); team tracking (Doerry & Palmer, 2011); argument, assumption, and evidence

(Barr, Koppel, Reeves, Hammick, & Freeth, 2008); and inter-professional care (Barr,

1998). Studies of participatory approaches are associated with framing participatory evaluation (Connors & Magilvy, 2011; Cousins & Whitmore, 1998), participatory action research (Hughes, 2003), participatory culture (Jenkins, Purushotma, Weigel, Clinton, &

Robison, 2009), participatory observation (De Vries, 2005), participatory learning theories (Hedges & Cullen, 2012), participatory education evaluation (Pietiläinen, 2012), participatory model mental health programming (Nastasi, & Varjas, 1998), participatory evaluation and process use (Jacob, Ouvrard, & Bérlanger, 2011), participatory evaluation strategy (Laudon, 2010); participatory evaluation (Daigneault & Jacob, 2009). In addition, literature on participatory approaches come largely from contexts of evaluation

(Cousins & Whitmore, 1998; Daigneault & Jacob, 2009; Daigneault, Jacob, & Tremblay,

2012), educational evaluation (Pietiläinen, 2012), early child development (Hedges &

Cullen, 2012), evaluation and planning (Connors & Magilvy, 2011; Jacob, Ouvrard, &

Bélanger, 2011), and psychology (Nastasi & Varjas, 1998).

Other researchers also suggest teaching and learning strategies be adopted such as: working in teams (DiGiovanni & McCarthy, in press; Schank, 1993; Trilling & Fadel,

2009), learning-by-doing (Hsu & Moore, 2011; Schank, Berman, & Macpherson, 1999;

Schanks, Fano, Bell, & Jona, 1993), competency-based learning (Barr, 2009; Hundert et al., 1996; IPEC, 2011; Prideaux, 2009; Smith, 2009), collaborative learning (Barr, 1998;

Cardellini, 2014; Felder & Brent, 2004), and cooperative learning (Bodner, Metz, &

Casey, 2014; Cardellini, 2014; Chiu, 2004; Felder & Brent, 2003, 2004, 2007; Oakley et al., 2004; Slavin, 1990). Another observation made is that students enter medicine with narrow focus on promoting quality health care (Locatis, 2007). Other researchers suggest assessment strategies such as peer assessment (Asgari & Dall’Alba, 2011; Hughes, 2011;

Lutze-Mann, 2014; Strijbos et al., 2010; Topping, 1998), sequencing instruction for deep

1996; Knowles, Holton, & Swanson, 2012; Pashler, McDaniel, Rohrer, & Bjork, 2009;

Robertson, Smellie, Wilson, & Cox, 2011; Weimer, 2014), self-efficacy (Bandura 1977,

1982, 1986; Bandura & Cervone 1983; Bandura & Locke 2003; Ford, 1992), and achievement and instruction relations (Bloom, 1976; Dahllöf, 1971; Good, Biddle, &

Brophy, 1975; Gropper, 1968, 1983a; Sarka & Chassiakos, 2010).

Hundert et al. (1996) claim that “medical education programs are producing physicians who do not meet the ethical standards the profession has traditionally expected its members to meet;” and that “the declines in civic responsibility and good manners throughout the United States, fall outside the scope of academic medicine” (p. 624).

Hundert et al. (1996) recommend that increased national attention be paid to improving the educational environment for graduate medical education and local action is needed to humanize the institutional settings in which residents and students learn and teach (p.

632). The current assessment formats for health professions education only test core knowledge and basic skills (Epstein & Hundert, 2002). Epstein and Hundert observed that the assessment format may exclude professional health practice domains.

According to Smith (2009), in the traditional education system, faculties focus more on what they teach rather than what the learners learn; planning is forwards (faculty defines the knowledge fundamentals, teaches that knowledge, then tests whether learners have learned that knowledge, and then hopes for the best). However, in outcome-based education, faculties emphasize what they expect learners will achieve when they complete their course. Smith notes that the traditional medical education uses Flexner’s

(Duderstadt, 2007), whereas outcome-based learning uses competency-based approach.

Greiner and Knebel (2003) note that a “competency-based approach to education could result in better quality because educators would begin to have information on outcomes, which could ultimately lead to better patient care” (p. 5).

According to Barr (2009), most programs are looking for united professions engaged in shared endeavors to improve services and the quality of patient care. These programs emphasize inter-professional relationships and a move away from a culture of blame toward analysis of failure. The programs recognize the professions’ expected responsibilities to work in teams to improve the health and wellbeing of society.

According to IPEC (2011), competency-based learning has basic core competencies across the professions. The committee maintains that competency-based learning depends on desired principles. These principles include a) “patient and family centered”; b) “community and population oriented”; c) “relationship focused”; d)

“process oriented”; e) “linked to learning activities, educational strategies, and behavioral assessment that are developmentally appropriate for the learner”; f) “able to be integrated across the learning continuum”; g) “sensitive to the systems context across practice settings”; h) “applicable across professions”; i) “stated in language common and meaningful across the professions”; and j) “outcome driven” (IPEC, 2011, p. 2). Leach

(2000) reports that “general competencies offer both depth and breadth, each of the general competencies offers a spectrum from novice to master, competencies add a new dimension, and competencies have a performance excellence model” (pp. 488-489).

Smith (2009) notes that outcome-based education defines what people expect the learners to achieve. The expectation is that the learners should be knowledgeable and become expert health professionals (Smith, 2009). Smith describes outcome-based learning as planning backwards, starting with the good health professionals and working backwards. Smith (2009) also notes, “an outcome-based curriculum rests on sound, practical, time-tested principles of good education,” and “the teaching aims at helping learners to learn” (p. 167).

Felder and Brent (2000) describe the usefulness of cooperative learning in education. These include a) “active and interactive learning”; b) “individuals students become confused and give up, but groups keep going”; c) “students see and learn alternative problem-solving strategies”; d) “students work harder knowing others are counting on them”; and e) “students, like professors, learn best what they teach” (Felder

& Brent, 2000, p. 23). Slavin (1990) notes that cooperative learning methods can be instructionally effective means of increasing students’ achievement when they use “group goals and individual accountability” (p. 32); “cooperative learning can be an effective form of classroom organization for accelerating student achievement” (p. 33); and cooperative learning strategies improve intergroup relations. Slavin (1990) found that

“About two-thirds of the time, there is a significant difference between the experimental and the control groups in favor of the experimental groups” (p. 53). Bodner et al. (2014) note that “cooperative learning may improve learner achievement, enhance learners’ self-esteem, increase the use of higher-order cognitive skills, improve both cross-sex and cross-ethnic relationships, and reduce science and math anxiety” (p. 142).

Sarka and Chassiakos (2010) assert that health professions education curricula frequently fail to expose the health professions students to public health principles, business philosophies and ethics, and team-building and functioning skills (p. 303).

Sarka and Chassiakos (2010) note that the IOM core competencies are foreign to most health professions students’ understanding and vernacular and collaboration becomes more difficult if the collaborators are not sharing the same expertise and language. Sarka and Chassiakos (2010) provide some barriers to effective “collaboration in an inter- professional group that includes expertise, practice style, language, and generational difference” (p. 309), and healthcare informatics, quality measurements and evidence- based practice may be foreign concepts depending on generational differences.

Bloom (1976) notes that “quality of instruction at any given time period also determines much of the future history of the learner within the schools and in the post- school years” (p. 136). Duncan (1969) notes “occupational achievement depends on schooling; and schooling transmits the influence of background on achievement” (p. 76).

According to Bloom (1976), the previous history of the learner does set some limits on what he can learn in a particular set of learning tasks. However, Bloom believes that when optimal qualities of instruction have been provided, individual learners will learn to their full limits. Bloom argues that it is extremely rare in schools throughout the world that an individual learner is provided with such optimal qualities of instruction. Bloom notes that group instruction may approach optimal qualities of instruction for only a small proportion of students in a given class. Bloom believes that, “it should be possible to increase greatly the proportion of students who can be provided with optimal qualities of instruction if group instruction can make use of a feedback/corrective system which constantly corrects the learning errors under group instruction” (p. 136).

The earliest research that investigated the relationship between achievement and instruction was that of Ahlström (1963; as cited in Dahllöf, 1971). According to Dahllöf (1971), Ahlström found a general trend through the different subtests in a test, with very few of them reaching statistical significance (pp. 4-6). Husén and Boalt (1968) argued, “What price has to be paid for the high standard achieved by a few?” (as cited in Dahllöf, 1971, p. 6).

Dahllöf (1971) remarked that if this is true, then there is no cost at all to be paid when moving from a selective system to a comprehensive one; or “from a theoretical point of view and to a practical application” (p. 6). Dahllöf notes that the outcome of instruction in the different systems is measured in very elementary functions of achievement (p. 18), the efforts behind the outcomes are of great interest, and the process variables must be considered (p. 19). Ginns, Heirdsfield, Atweh, and Watters (2001) found that beginning teachers who were trained in the participatory approach changed and benefited greatly; however, those who were trained in the traditional direct approach tended “to reproduce the profession, rather than use critical reflection that can lead to change, progress and reflection on practice” (p. 129).

Despite all these concerns and strategies in place, recent reports show more than

500 patients per day died in hospitals alone through medical errors, accidents, and infections (Binder, 2013). Barr (2009) reports that much that “a teacher brings from health professions education will be readily transferable to inter-professional education, but teaching a class drawn from a range of professions is challenging” (p. 191). Barr

Sarka and Chassiakos (2010) suggest health professionals should heed their title and teach their colleagues, patients, and students to be leaders of and contributors to 21st-century global health. There is, therefore, a need to train our health sciences and professions students in the IOM standards using a participatory instruction approach in classroom teaching and learning so that they are fully prepared to practice effective participative and collaborative problem solving and decision making in quality health care delivery. The participatory paradigm is one of the teaching and learning approaches that emphasize integration of component skills, collaboration, political reality, negotiation, and a change orientation (Creswell & Plano Clark, 2011, p. 42).

Statement of the Problem

The problem for this study is to compare the perceived achievement and self-concept scores in IOM standards of Health Sciences and Professions students in a group module project using participatory and direct instructional methods. Furthermore, the problem to be investigated in this study is the effectiveness of participatory instruction on the final perceived achievement scores of College of Health Sciences and Professions (HSP) students in the HSP 4510/5510 course. In addition, the problem is to examine the self-concepts of some selected HSP students on the IOM standards for a more in-depth explanation of the quantitative results.

Purpose of the Study

The purpose of this quasi-experimental, participatory evaluation study was to examine the effectiveness of participatory instruction and direct instruction on the level of perceived achievement of HSP students; to compare the differences in means and gains on the HSP students’ change in overall perceived achievement scores and change in perceived self-concept scores on a group module project after a treatment; to determine whether students’ overall final perceived achievement scores on standards increased or decreased as a result of the instruction, while controlling for overall initial perceived achievement scores; and then to follow up with purposefully selected typical cases to explore those results on self-concepts in more depth.

Significance of the Study

The findings of this study will help health educators, health professionals, health professions students, stakeholders in health education, and health policymakers to incorporate participatory instruction into the health professions education curriculum in the United States of America. In addition, the findings will inform the recruitment agencies of the importance of participatory instruction and the use of it in their training programs. The findings will encourage the organizations to use participatory instruction as job fit for their work. The findings will serve as resource materials for researchers and health professions students. The findings will help health professions instructors to modify their classroom instruction towards participatory learning instruction. This study will be an indicator for health curriculum planners to modify their present scope of IOM standards. 32

Research Questions

According to Good, Biddle, and Brophy (1975), instruction appears to be an important variable in students’ achievement gains. Gropper (1983), Dahllöf (1971), and

Bloom (1976) theorize that achievement is a function of instruction. Moore (2004) notes that designing instructional sequences helps students gain deep understanding; and mastery goals lead to positive processes and outcomes (Elliot & Thrash, 2001). The question is: Do students’ final perceived achievement scores on standards increase as a result of instruction, after controlling for their initial perceived achievement scores?
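Framed statistically, this overarching question implies an analysis of covariance (ANCOVA) in which final perceived achievement is modeled as a function of instructional type with initial perceived achievement as the covariate. The display below is a hedged sketch in my own notation; it does not appear in the dissertation:

```latex
% fpAch_{ij}: final perceived achievement of student i under instructional type j
% ipAch_{ij}: initial perceived achievement (covariate); tau_j: effect of type j
\[
  \mathit{fpAch}_{ij} = \mu + \tau_j
    + \beta\,\bigl(\mathit{ipAch}_{ij} - \overline{\mathit{ipAch}}\bigr)
    + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim N(0, \sigma^{2}).
\]
```

Under this reading, the research questions that follow ask whether the instructional-type effect, alone or in combination with major, team preference, or IP team, remains after the covariate adjustment.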

Specifically, the research questions for this study included the following:

Research question 1. Do Health Science and Professions (HSP) students who are taught using a participatory instruction have greater gain on change in overall perceived achievement scores than the HSP students who are taught using the direct instruction?

Research question 2. How do HSP students feel about team preference on a group module project with regard to participatory and direct instructional types?

Research question 3. How does the participatory instruction of IOM standards on a group module project affect students’ final perceived achievement scores in their majors, controlling for their initial perceived achievement scores?

Research question 4. How does the direct instruction of IOM standards on a group module project affect students’ final perceived achievement scores in their majors, controlling for their initial perceived achievement scores?

Research question 5. What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?

Research question 6. What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?

Research question 7. How does a participatory instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their

IP teams, controlling for their initial perceived achievement scores?

Research question 8. How does the direct instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their IP teams, controlling for their initial perceived achievement scores?

Research question 9. What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?

Research question 10. How do the HSP students’ journal reflections help explain their self-concept on standards of a group module project?

Hypotheses

Research questions and hypotheses can guide the researcher in sifting through a mass of data. The hypothesis should be stated as a suggested solution to a problem or as the relationship of specified variables (Mauch & Birch, 1989). Corresponding to the research questions stated, the hypotheses in this study are listed as follows:

Hypothesis 1. Null hypothesis (Ho): There is no statistically significant gain between the change in overall perceived achievement scores for the HSP students taught using a participatory instruction and the change in overall perceived achievement scores for the HSP students taught without using the participatory instructions.

(H0: μgpAch-part − μgpAch-direct = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for HSP students taught using a participatory instruction will be statistically significantly greater than will the change in overall perceived achievement scores for HSP students taught without using participatory instruction.

(Ha: μgpAch-part > μgpAch-direct).
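As an illustration only (not the dissertation’s actual analysis code, which Chapter 4 reports from a statistics package), Hypothesis 1 amounts to an independent-samples t test on change scores. The sketch below assumes a hypothetical data file and column names (iomska_scores.csv, group, ipAch, fpAch):

```python
# Hedged sketch of the Hypothesis 1 comparison: change (gain) scores by
# instructional group, tested with an independent-samples t test.
import pandas as pd
from scipy import stats

df = pd.read_csv("iomska_scores.csv")       # hypothetical file name
df["gain"] = df["fpAch"] - df["ipAch"]      # change in overall perceived achievement

part = df.loc[df["group"] == "participatory", "gain"]
direct = df.loc[df["group"] == "direct", "gain"]

# scipy returns a two-sided p value; halve it for the one-tailed Ha
# (mu_gpAch-part > mu_gpAch-direct) when the t statistic is positive.
t, p_two = stats.ttest_ind(part, direct, equal_var=True)
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
print(f"t = {t:.2f}, one-tailed p = {p_one:.4f}")
```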

Hypothesis 2. Hypothesis 2 comprised four sub-hypotheses. Hypothesis 2a addressed the gains in change in overall perceived achievement scores for students working in teams in both the participatory and direct groups. Hypothesis 2b looked at the gains in change in overall perceived achievement scores for students working alone in both the participatory and direct groups. Hypothesis 2c addressed the gains in students’ change in overall perceived achievement scores in the participatory group, working in teams versus working alone. Hypothesis 2d tested the gains in students’ change in overall perceived achievement scores in the direct group, working in teams versus working alone.

Hypothesis 2a. Null hypothesis (Ho): There is no statistically significant gain between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction. (H01: μcpAchWktm-part − μcpAchWktm-direct = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction will be statistically significantly greater than the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction. (Ha1: μcpAchWktm-part > μcpAchWktm-direct).

Hypothesis 2b. Null hypothesis (Ho): There is no statistically significant gain between the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction. (H02: μcpAchWkal-part − μcpAchWkal-direct = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction will be statistically significantly greater than the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction. (Ha2: μcpAchWkal-part > μcpAchWkal-direct).

Hypothesis 2c. Null hypothesis (Ho): There is no statistically significant gain between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction. (H0: μcpAchWktm-part − μcpAchWkal-part = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction will be statistically significantly greater than the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction. (Ha: μcpAchWktm-part > μcpAchWkal-part).

Hypothesis 2d. Null hypothesis (Ho): There is no statistically significant gain between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction. (H0: μcpAchWktm-direct − μcpAchWkal-direct = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction will be statistically significantly greater than the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction. (Ha: μcpAchWktm-direct > μcpAchWkal-direct).

Hypothesis 3. Null hypothesis (Ho): There is no statistically significant difference between the unadjusted final perceived achievement scores for the HSP students from various majors taught using participatory instruction and their final perceived achievement scores adjusted by controlling for their initial perceived achievement scores. (H0: μunadjfpAch-part − μadjfpAch-part = 0).

Alternative hypothesis (Ha): The unadjusted final perceived achievement scores for the HSP students from various majors taught using participatory instruction will be statistically significantly greater than their final perceived achievement scores adjusted by controlling for their initial perceived achievement scores. (Ha: μunadjfpAch-part > μadjfpAch-part).

Hypothesis 4. Null hypothesis (Ho): There is no statistically significant difference between the unadjusted final perceived achievement scores for the HSP students from various majors taught using direct instruction and their final perceived achievement scores adjusted by controlling for their initial perceived achievement scores. (H0: μunadjfpAch-direct − μadjfpAch-direct = 0).

Alternative hypothesis (Ha): The unadjusted final perceived achievement scores for the HSP students from various majors taught using direct instruction will be statistically significantly greater than their final perceived achievement scores adjusted by controlling for their initial perceived achievement scores. (Ha: μunadjfpAch-direct > μadjfpAch-direct).
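Hypotheses 3 and 4 describe ANCOVA-style comparisons of final perceived achievement by major, controlling for initial perceived achievement, within each instructional group. Purely as a hedged sketch (file and column names are assumptions; the dissertation’s Tables 16–25 report the actual analyses), such models could be fit as follows:

```python
# Hedged ANCOVA sketch for Hypotheses 3/4: fpAch by major, controlling for
# ipAch, within one instructional group at a time.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("iomska_scores.csv")                    # hypothetical
grp = df[df["group"] == "participatory"]                 # or "direct" for Hypothesis 4

# Homogeneity-of-regression check: the covariate-by-major interaction
# should be non-significant before interpreting adjusted means.
slopes = smf.ols("fpAch ~ ipAch * C(major)", data=grp).fit()
print(anova_lm(slopes, typ=3))

# ANCOVA proper (Type III sums of squares, as the dissertation's tables describe).
ancova = smf.ols("fpAch ~ ipAch + C(major)", data=grp).fit()
print(anova_lm(ancova, typ=3))
```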

Hypothesis 5. Null hypothesis (Ho): There is no positive and statistically significant impact of the HSP students’ instructional type on their final perceived achievement scores in their majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Alternative hypothesis (Ha): Participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of the HSP students from various majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Hypothesis 6. Null hypothesis (Ho): There is no positive and statistically significant impact of the HSP students’ instructional type on their final perceived achievement scores in their team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Alternative hypothesis (Ha): Participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of the HSP students from various team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Hypothesis 7. Null hypothesis (Ho): There is no statistically significant difference between the unadjusted final perceived achievement scores for the HSP students from various IP teams taught using participatory instruction and their final perceived achievement scores adjusted by controlling for their initial perceived achievement scores. (H0: μunadjfpAch-part − μadjfpAch-part = 0).

Alternative hypothesis (Ha): The unadjusted final perceived achievement scores for the HSP students from various IP teams taught using participatory instruction will be statistically significantly greater than their final perceived achievement scores adjusted by controlling for their initial perceived achievement scores. (Ha: μunadjfpAch-part > μadjfpAch-part).

Hypothesis 8. Null hypothesis (Ho): There is no statistically significant difference between the unadjusted final perceived achievement scores for the HSP students from various IP teams taught using direct instruction and their final perceived achievement scores adjusted by controlling for their initial perceived achievement scores. (H0: μunadjfpAch-direct − μadjfpAch-direct = 0).

Alternative hypothesis (Ha): The unadjusted final perceived achievement scores for the HSP students from various IP teams taught using direct instruction will be statistically significantly greater than their final perceived achievement scores adjusted by controlling for their initial perceived achievement scores. (Ha: μunadjfpAch-direct > μadjfpAch-direct).

Hypothesis 9. Null hypothesis (Ho): There is no positive and statistically significant impact of the HSP students’ instructional type on their final perceived achievement scores in their IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Alternative hypothesis (Ha): Participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of the HSP students from various IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.
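Hypotheses 5, 6, and 9 concern the incremental impact of instructional type on final perceived achievement after accounting for the covariate and a grouping variable. A hedged sketch of that hierarchical (blockwise) regression logic, with illustrative variable names only, is shown below:

```python
# Hedged sketch of hierarchical regression: does instructional type add
# explained variance in fpAch beyond ipAch and a grouping variable?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("iomska_scores.csv")                            # hypothetical

step1 = smf.ols("fpAch ~ ipAch", data=df).fit()                  # covariate only
step2 = smf.ols("fpAch ~ ipAch + C(major)", data=df).fit()       # add major (or team preference / IP team)
step3 = smf.ols("fpAch ~ ipAch + C(major) + C(group)", data=df).fit()  # add instructional type

for label, model in [("Step 1", step1), ("Step 2", step2), ("Step 3", step3)]:
    print(f"{label}: R^2 = {model.rsquared:.3f}")

# The R^2 change at Step 3 is the unique contribution of instructional type,
# which is what these hypotheses ask about.
print(f"Delta R^2 for instruction = {step3.rsquared - step2.rsquared:.3f}")
```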

Limitations of the Study

A limitation is a factor that may affect the outcome of the study but is not under the control of the researcher (Mauch & Park, 2003). This study is a quasi-experimental study using a nonequivalent control group design (intact classes). It is limited to participatory instruction and to cognitivist and social constructivist approaches to teaching and learning; the context is health professions education, the task is a group module project, and the cases are the most common clinical cases. However, the teachers have been using traditional methods (direct instruction) that establish a direction and rationale for learning and relate new concepts to previous learning. The teachers also lead students through sequential instruction based on predetermined steps, introduce and reinforce a concept, and provide students with practice and feedback relative to how well they are doing.

This study provides information about the effectiveness of using participatory instruction for teaching IOM standards in a group module project. Instructional materials utilized in the study include a hands-off video, previous iBook module projects, a previous module website, and a previous interactive PDF. A broad generalization about the use of participatory instruction for teaching other competencies is not possible because of the particular population used. Undergraduate and graduate HSP students were involved in the study even though IOM standards are offered to HSP students from Health Professional Education.

Another limitation is the involvement of the control and experimental HSP students from different cohort groups. Students in the experimental group were taught for 14 weeks in the fall and spring semesters but seven weeks in the summer; they took the pre-survey at the first meeting of each semester and the post-survey in the last week of each semester. Students in the control group were not taught with participatory instruction but received a lecture for the first two weeks of each semester; they took the pre-survey at the first meeting and the post-survey in the second week of each of the three semesters. The direct instruction lectures have many design features that may be idiosyncratic to the current implementation (participatory instruction strategy). The use of direct instruction as a control is only meant to indicate the general effectiveness of its approach.

Delimitations of the Study

Delimitations are about the scope of the study. They should tell the reader the elements of the study and the reasons why they are included or excluded (Mauch & Park, 2003). The following points are addressed in this study:

a) The participants of this study are undergraduate and graduate health sciences and professions students.

b) The sample of students is a convenience sample of six intact classes from the 2013/2014 and 2014/2015 academic years.

c) The study was limited to the fall, spring, and summer semesters.

d) The participants are HSP students who enrolled in the HSP 4510/5510 courses during the 2013 to 2015 academic years.

e) The independent variables are two instructional conditions (participatory instruction, direct instruction); five instruction units (Rp = providing patient-centered care, Rtw = working as part of interdisciplinary teams, Re = employing evidence-based medicine, Rq = applying quality improvement, and Rinf = utilizing information technology); and six categories of majors (BSN, MED, PT, SLP, SW, NUT). The dependent variables are knowledge achievement scores (scores on the pre-survey and post-survey).

A 5-item pre-survey about IOM standards was used to determine students' prior knowledge. The five items on IOM standards were closed-ended questions, quantitatively rated on a 7-point knowledge scale ranging from "No knowledge" (1) to "Expert" (7) with no additional labels marked. The survey also had five open-ended items that requested the students to provide comments on their quantitative perceived knowledge ratings. Meyers, Gamst, and Guarino (2013) noted that "extreme values could distort the results of a statistical analysis" (p. 37). In this sampling, students who had extreme values of 1 or 2 (no knowledge) or 6 or 7 (expert) on each IOM standard, and who satisfied 3 out of 5 IOM standards, were to be selected for the second phase of data analysis. The rationale for doing this was not to lose these important participants, since they might be eliminated through quantitative data analysis. The comments on the post-survey and weekly journals of these students were to be retrieved from the existing database for qualitative data analysis. However, in the main study, the 3-out-of-5 IOM standards criterion failed to provide a case for the direct instruction group, so a criterion of ±3 standard deviations was used instead. The failure of the 3-out-of-5 criterion might be due to the deletion of the three outlier cases from the sample through case-wise diagnostic data screening for statistical assumptions to hold.
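To make the two screening rules above concrete, here is a minimal sketch in Python (pandas); the DataFrame and the column names std_1 through std_5 are hypothetical placeholders rather than the actual study database.

import pandas as pd

# Hypothetical pre-survey ratings: one row per student, one column per IOM standard,
# each on the 7-point knowledge scale (1 = "No knowledge", 7 = "Expert").
survey = pd.DataFrame({
    "std_1": [1, 4, 7, 3, 2],
    "std_2": [2, 5, 6, 4, 1],
    "std_3": [1, 4, 7, 3, 2],
    "std_4": [2, 3, 6, 5, 1],
    "std_5": [1, 4, 7, 4, 2],
})

# Rule 1: extreme ratings (1-2 or 6-7) on at least 3 of the 5 standards.
extreme = survey.isin([1, 2, 6, 7])
rule_extreme = extreme.sum(axis=1) >= 3

# Rule 2 (the fallback used in the main study): total score beyond +/- 3 standard deviations.
total = survey.sum(axis=1)
z = (total - total.mean()) / total.std(ddof=1)
rule_3sd = z.abs() > 3

print(survey[rule_extreme])  # candidates flagged for the qualitative follow-up phase
print(survey[rule_3sd])      # outliers under the +/- 3 SD criterion

The point of the sketch is only that the two rules operate on different units: the extreme-value rule looks at individual standard ratings, while the ±3 standard deviation rule looks at the summed score.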

Definition of Terms

In this research, the five competency areas that are covered include a) "patient-centered care", b) "interdisciplinary team", c) "evidence-based practice", d) "quality improvement" approaches, and e) "informatics" (Greiner & Knebel, 2003, p. 1). With the ideal 21st-century health care system, health professions education is to meet patients' needs. According to Hundert et al. (1996; as cited in Greiner & Knebel, 2003), the core competencies for health professions involve the "use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice" (pp. 3-4). Greiner and Knebel (2003) described the five IOM core competencies. These competencies include providing "patient-centered care", working "in interdisciplinary teams", applying "quality improvement", employing "evidence-based practice", and "utilizing informatics" (Greiner & Knebel, 2003, p. 4).

Provide patient-centered care. It refers to “identify, respect, and care about patients’ differences, values, preferences, and expressed needs; relieve pain and suffering; coordinate continuous care; listen to, clearly inform, communicate with, and educate patients; share decision making and management; continuously advocate for disease prevention, wellness, and promotion of healthy lifestyles, including a focus on population health” (Greiner & Knebel, 2003, p. 45).

Work in interdisciplinary teams. It refers to "cooperate, collaborate, communicate, and integrate care in teams to ensure that care is continuous and reliable" (Greiner & Knebel, 2003, p. 45).

Employ evidence-based practice. It refers to “integrate best research with clinical expertise and patient values for optimum care, and participate in learning and research activities to the extent feasible” (Greiner & Knebel, 2003, pp. 45-46).

Apply quality improvement. It refers to "identify errors and hazards in care; understand and implement basic safety design principles, such as standardization and simplification"; "continually understand and measure quality of care in terms of structure, process, and outcomes in relation to patient and community needs"; and "design and test interventions to change processes and systems of care, with the objective of improving quality" (Greiner & Knebel, 2003, p. 46).

Utilize informatics. It refers to "communicate, manage knowledge, mitigate error, and support decision making using information technology" (Greiner & Knebel, 2003, p. 46).

Achievement. Achievement refers to the drive "to overcome obstacles, to exercise power, to strive to do something difficult as well and as quickly as possible" (Murray, 1963, p. 742). In this study, achievement refers to the total score of each of the five standard scores.

Perceived achievement score. For the purpose of this study, the perceived achievement score refers to the sum of the self-concept rating scores on the standards for an individual (Bandura & Locke, 2003; Elliot & Thrash, 2001). At the beginning of a course some students brought in a prior (initial) perceived achievement, and at the end of the course these students had a final perceived achievement as a measure (Bloom, 1976).

Initial perceived achievement score. For the purpose of this study, the initial perceived achievement score refers to the sum of the self-concept rating scores on the standards for an individual on the pre-survey of IOM self-reported knowledge achievement items (Bloom, 1976; Director, 1974).

Final perceived achievement score. In this study, the researcher refers to the final perceived achievement score as the sum of the self-concept scores on the standards for an individual on the post-survey of IOM self-reported knowledge achievement items (Bloom, 1976; Director, 1974).

Change perceived achievement score. In this study, the change perceived achievement score refers to the difference between the initial perceived achievement score and the final perceived achievement score of an individual on the standards (Bloom, 1976; Director, 1974).

Mean perceived achievement score. In this study, the researcher refers to the mean perceived achievement score as the average of the perceived achievement scores of a group of individuals (Director, 1974).

Gain perceived achievement. In this study, a gain in perceived achievement refers to the difference between the change perceived achievement scores of two groups of individuals (Director, 1974).
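To show how the score definitions above fit together, the following is a minimal sketch in Python; the numbers and group labels are hypothetical and are not data from the study.

import numpy as np

# Hypothetical sums of the five self-concept ratings per student (pre- and post-survey).
initial = np.array([12, 15, 10, 18, 14, 11])   # initial perceived achievement scores
final = np.array([25, 27, 20, 30, 19, 16])     # final perceived achievement scores
group = np.array(["participatory"] * 3 + ["direct"] * 3)

# Change perceived achievement score: final minus initial, per individual.
change = final - initial

# Mean perceived achievement score: the group average of the change scores.
mean_change = {g: change[group == g].mean() for g in ("participatory", "direct")}

# Gain perceived achievement: the difference between the two groups' mean change scores.
gain = mean_change["participatory"] - mean_change["direct"]

print(mean_change, gain)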

Self-concept score. In this study, a self-concept score refers to the individual self-reported knowledge rating score on each standard on the pre- and post-survey (e.g., a student's self-concept on patient-centered care means a student's perceived rating score on patient-centered care).

Self-efficacy. Self-efficacy refers to people's beliefs about their capability to learn or do something (Bandura, 1993). In this study, the researcher refers to a student's self-efficacy as a reflection of a student's prior (initial) perceived achievement score.

Instructional impact. In this study, the researcher refers to instructional impact as the change score in the paired pre- and post-survey self-concept rating scores (raw scores) in each standard of an individual over time (Director, 1974; Gropper, 1983a).

Module. A module refers to "a self-instructional package treating a single topic or unit of a course" (Morrison, Ross, Kemp, & Kalman, 2007, p. 432). In this study, a module corresponds to one of the IOM standards. In all, there are five modules, meaning that there are five standards. These standards are patient-centered care, interdisciplinary teamwork, evidence-based practice, quality improvement, and informatics.

Reflexive journal. The reflexive journal included the original notes of the author. The notes were about important issues of interest that emanated during the data collection. These issues were the easy ideas about possible assertions and methodological considerations (Ginns et al., 2001).

Participatory learning and action. Participatory learning and action refers to "a growing family of approaches, tools, attitudes and behaviors to enable and empower people to present, share, analyze and enhance their knowledge of life and condition and to plan, act, monitor, evaluate, reflect and scale up community action" (Appel, Buckingham, Jodoin, & Roth, 2012, p. 5).

Quality. According to Holpp (1993; as cited in Gloe, 1998), quality in health care has several definitions. These definitions are:

a) Quality is a continuous process of improvement focusing on incremental change over a period of time, with the emphasis on continual improvement: improvement in the way employees perform their jobs to create greater efficiency and cost-effectiveness, produce greater learning, and apply their skills in health care.

b) Quality is outstanding service. Providing education that is important, necessary, and easily assimilated into day-to-day practice enhances patient care. Such education will help employees in working smarter rather than harder.

c) Quality is cost control and resource utilization. Education departments are often the first target of cost reductions. By showing money, time, or resource savings and patient benefit, clinical staff development specialists (SDSs) can assure themselves a place in the health care organization.

d) Quality is doing the right things the first time. Quality is accomplished by putting customers first when considering changes and by making decisions based on data rather than gut feelings. (Gloe, 1998, p. 301)

Gloe (1998) suggests that clinical staff development specialists must strive to monitor and improve the quality of the products and services they offer to move the organization forward; that they must align their programs with the organization’s goals and objectives, creating a positive image for the organization; and that staff development practice must be integrated into the organization (p. 333).

Organization of the Study

This dissertation is organized into five chapters. Chapter 1 focuses on the introduction of the study, providing the background, statement of the problem, purpose of the research, significance of the study, research questions, hypotheses, limitations of the study, delimitations of the study, definition of terms, and organization of the study.

Chapter 2 provides the review of the literature, consisting of the theoretical and philosophical foundations of teaching and learning, achievement, direct instruction, participatory instruction, competency-based learning, collaborative learning, cooperative learning, classroom assessment, group facilitation, and participatory evaluation. Chapter 3 describes the methodology of the study, including the research design, population, sampling plan, sample size selection, instrument, data collection and procedure, data analyses and procedures, and the pilot study and results. Chapter 4 presents the results and findings of the data. Chapter 5 discusses the findings, conclusions, and implications of the study and provides recommendations for future studies.


Chapter 2: Review of Literature

This chapter reviews the theoretical and philosophical foundations of teaching and learning, achievement, direct instruction, participatory instruction, competency-based learning, collaborative learning, cooperative learning, classroom assessment, group facilitation, and participatory evaluation.

Theoretical Perspective

This section dwells on ontological literature, epistemological literature, axiological literature, a theory of practice, parameters for the metatheory of instruction, learning, and their concepts; and it discusses and compares direct instruction and participatory instruction approaches, the roles of instructors and students, and the classroom environment. According to Chilisa and Tsheko (2014), the theoretical perspective/conceptual framework is derived from philosophical assumptions about ontology (nature of reality), epistemology (knowledge), methodology, and axiology (value).

Ontological Literature

According to Chilisa and Tsheko (2014), a relational ontology has four principles. These principles include accountable responsibility, "respectful representation, reciprocal appropriation, and rights and regulations" (p. 223). According to De Vries (2005), one field in philosophy is ontology. De Vries maintains that ontology deals with being, with what is, and what exists; and ontology asks for the essence of things (p. 3). Creswell and Plano Clark (2011) note that an ontological element of the participatory worldview is political reality. Creswell and Plano Clark stress that in the ontological participatory worldview, the findings are negotiated with research participants (p. 42).

Epistemological Literature

Chilisa and Tsheko (2014) relate community as knower and knowledge as the established general beliefs and concepts. The authors report that a postcolonial indigenous paradigm informs a relational epistemology. Epistemology values communities as knowers. Epistemology values knowledge as the well-established general beliefs, concepts, and theories of any particular people. These theories are stored in "the language of people, practices, rituals, proverbs revered traditions, myths, and folktales" (Chilisa & Tsheko, 2014, p. 223). Chilisa and Tsheko note, "Knowing is something that is socially constructed by people who have relationships and connections with each other, the living and the nonliving, and the environment" (p. 223). Chilisa and Tsheko (2014) add, "Knowers are seen as beings with connections with other beings, the spirits of the ancestors, and the world around them" (p. 223). The world informs what they know and how they can know it. Chilisa and Tsheko (2014) note that the "challenge is on how to bring this cultural knowledge into the research process" (p. 223). According to De Vries (2005), a second field in philosophy is epistemology. Epistemology focuses on the nature of knowledge. De Vries asserts that knowledge (a justified true belief) is important in society; that knowledge is not always transferred, but sometimes has to grow in the mind of individuals (philosophy of mind). According to Creswell and Plano Clark (2011), an epistemological element of the participatory worldview is collaboration. Creswell and Plano Clark stress that in the "epistemological participatory worldview, the researchers actively involve participants as collaborators" (p. 42).

Axiological Literature

According to Chilisa and Tsheko (2014), the research process informs a relational ethical framework. The "relational ethical framework moves away from conceiving the researched as participants to seeing them as co-researchers" (p. 223). The axiological element focuses on the cordial relationships between the researcher and the researched (Chilisa & Tsheko, 2014). Chilisa and Tsheko note, "The researched takes into account the web of relationship with the living and the nonliving" (p. 223). According to Creswell and Plano Clark (2011), an axiological element of the participatory worldview is negotiation. Creswell and Plano Clark report that in the axiological participatory worldview, the "researchers negotiate their biases with participants" (p. 42).

A Theory of Practice

According to Mercer (1995), "a theory of the guided construction of knowledge in schools and other educational settings must" have three essential requirements (p. 66). These requirements are that a) "it must explain how language is used to create joint knowledge and understanding"; b) it must "explain how people help other people to learn"; and c) it must "take account of the special nature and purpose of formal education" (Mercer, 1995, p. 66). Gropper (1983a) notes that effective instruction for any instructional objective is based on several assumptions. These assumptions include the instructional conditions and the degree of attention each instructional condition requires. The instructional conditions are the instructional treatments for an instructional objective. These conditions are learning requirements (learning styles, preferences, and strategies) and their obstacles (cognitive processing, affective, social); for example, the requirement that the student recall information on a delayed basis. Gropper notes that there will be an appropriate prescription of treatments when these instructional conditions are accurately and comprehensively identified. These instructional conditions interact with the effects of the treatment methods. They cannot be manipulated in any given situation.

Merrill (1991; as cited in Wilson, Teslow, & Osman-Jouchoux, 1995) provides a definition of constructivism with respect to instruction, including a) "knowledge is constructed from experience"; b) "learning is a personal interpretation of the world"; c) "learning is an active process of meaning-making based on experience"; d) "learning is collaborative with meaning negotiated from multiple perspectives"; e) "learning should occur (or be 'situated') in realistic settings"; f) "testing should be integrated with the task, not a separate activity"; g) "reflection is a key component of learning to become an expert"; h) "like instruction, assessment should be based on multiple perspective"; and i) "students should participate in establishing goals, tasks, and methods of instruction and assessment" (p. 141).

Brunner (1966; as cited in Knowles et al., 2012) provides the "four criteria for a theory of instruction or inquiry of teaching" (p. 96). These criteria include a) "A theory of instruction should specify the experiences that most effectively implant in the individual a predisposition toward learning;" b) "A theory of instruction must specify the ways in which a body of knowledge should be structured so that it can be most readily grasped by the learner;" c) "A theory of instruction should specify the most effective sequences in which to present the materials to be learned;" and d) "A theory of instruction should specify the nature and pacing of rewards and punishments in the process of learning and teaching" (pp. 40-41).

Parameters for the Metatheory of Instruction

According to Gropper (1983a), the metatheory embraces all learning requirements (behavioral, cognitive, or social). It is expressed quantitatively and is available to all. It can be used to analyze and compare instructional theories and models. It can be used to analyze and compare the depth and comprehensiveness of learning requirements and obstacles (conditions). It can be used to predict the quantity and severity of the conditions (the posed learning difficulties). Further, it can be used to outline the criteria employed in defining and quantifying the levels of attention for the given conditions. Furthermore, it can be employed in outlining the criteria for matching estimated condition severity and attention levels. Finally, and "most important of all, instructional theories and models can be evaluated for their capacity to predict and produce achievement" (p. 43). The parameters of the proposed metatheory include conditions and treatments.

Conditions. Conditions have two postulates (Gropper, 1983a). These postulates are a) for each objective to be learned, there is a population of true learning requirements (behavioral, cognitive, social); and b) for each learning requirement, there is a population of true obstacles (subject-matter, target-audience, and their interaction characteristics) to be met (p. 43). Two parameters are proposed for the characterization of the conditions applicable to an objective: a) the number of conditions associated with an objective; and b) the difficulty level posed by each condition.

Characteristics of Instructional Theories and Models

Gropper (1983a) suggests four characteristics for instructional theories and models. These characteristics are a) "differential analysis of learning requirements"; b) quantification of conditions and treatments; c) "compatibility with a theory of learning"; and d) "linkages" among "learning theory, instructional theory, and an instructional model" (Gropper, 1983a, p. 48). For differential analysis of learning requirements, Gropper discusses the two taxonomies that are used to analyze objectives. These taxonomies include intact classification of objectives and dissection of objectives. The intact classification of objectives involves assigning an objective to one of several categories, comprising an instructional-design taxonomy (Gropper, 1983a; Gagne & Briggs, 1974). According to Gropper (1983a), categories of intact classification include recalling facts, defining concepts, giving explanations, following rules, and solving problems (p. 49). He reports that intact classification allows an identification of only the common learning requirements. He stresses that analyzing component skills leads to looking at learning requirements and identifying potential obstacles for the learning requirements. The dissection of objectives involves dissecting and analyzing an objective for its distinctive component parts. This approach is behaviorally represented in the objectives that discriminate, generalize, and associate to make a complete chain (Gropper, 1974, 1983a). Each objective is analyzed for a specific mix of the component skills (Gropper, 1983a).

For quantification of conditions and treatments, Gropper (1983a) notes that it is critical to quantify conditions as easy or difficult in an instructional theory or model. He stresses that it is beneficial to identify the conditions, to quantify levels of condition variables accurately, and to characterize the levels of attention that hinder learning an objective. He notes that accurate and relevant matching of condition severity and treatment levels depends on a theory or model.

For compatibility with a theory of learning, Gropper (1983a) suggests that "if the proposed metatheory is on target in its emphasis on learning requirements and obstacles to their being met, then compatibility is the wrong word" (p. 50). He stresses that an instructional theory needs to build on a theory of learning. The theory of learning should identify a) the "types of learning requirements that characterize objectives", b) "parameters… that affect how easy or difficult it might be to meet them", and c) "parameters that characterize conditions under which learning" occurs (Gropper, 1983a, p. 50).

For linkages among learning theory, instructional theory, and an instructional model, Gropper (1983a) provides a conceptual map to illustrate the interrelationships. This concept map shows the interrelationships among theory, research, and practice (see Figure 1). Gropper describes the common vocabulary used in instructional theory and models, and then turns to descriptive and prescriptive research.


Figure 1. Inter-relationships among theory, research, and practice

Source: Adapted from Gropper (1983a, p. 51).

Gropper (1983a) maintains that learning theory, instructional theory, and instructional model share common vocabulary; and that the parameters in a learning theory are the same as in a theory of instruction. He notes that the three differ in goals and rhetoric.

Descriptive and prescriptive research. Gropper reports that a learning theory, an instructional theory, and an instructional model exhibit several goals. These goals are a) internal coherence, b) a prerequisite for demonstrating their external validity, c) internal consistency, and d) upgraded practice (p. 51). He suggests that the goal of upgraded practice depends on the reciprocal goal of upgraded theory, and it benefits from analytical and systematic changes in both practice and theory. Gropper notes that a prescriptive instructional theory guides practice and provides prescriptions for applied evaluation. The results of prescriptive instructional theory research provide ways to modify an instructional model and a prescriptive instructional theory.

According to Gropper (1976), the subject-matter, the target-audience, and the subject-matter and target-audience should be accommodated in prescriptions. Gropper notes that the type of learning involved in component skills may differ from one objective to the other, and the severity of difficulties may vary from audience to audience. For the detailed analysis of conditions, Gropper recommends that learning requirement should be subject-matter dependent, target-audience dependent, and subject matter and target- audience dependent.

According to Gropper (1976), a theory of instruction should be tried out and should tell us about differences in treatments. Gropper stresses that a theory of instruction must offer prescriptions leading to the design of an appropriate cumulative experience, and should provide prescriptions generating such cumulative learning experiences. Gropper notes that a theory of instruction should be concerned with possible exceptions, and must be concerned with when feedback or active response may or may not be applied. Gropper asserts that a theory of instruction must prescribe specific treatments, must identify the nature of students' differences in learning requirements, and must be concerned with integrating technology into teaching and learning (pp. 7-12).

Learning Theory

According to Gropper (1983b), learning theory describes how behavioral changes occur. He reports that the parameters of learning theory include a) a unit of behavior to be analyzed, b) the conditions that produce the changes in it, and c) the nature and permanence of the changes in it that can result (p. 106). He discusses the learning concepts, including a) stimulus-response (S-R) association; b) stimulus control; c) discrimination; d) generalization; e) association; and f) chains. He concludes that a successful performance of any activity may occur if a performer has learned all S-R units in it and has integrated them into a complete chain.

Instructional Concepts

According to Gropper (1983b), instruction is concerned with techniques that bring about specific changes in behavior. He discusses the following instructional concepts: criterion stimulus, cue, incrementing, shaping, and fading (p. 110). He concludes that students' success on criterion-level performance requires the incremental and gradual shaping of behavior and the decrement and gradual elimination of cues.

Origins of an Instructional Theory and its Later Influence

According to Gropper (1975), several procedures in educational technology have been achieved through ‘tryout and revision’. This tryout and revision procedure has won widespread acceptance. Gropper provides some early history of educational technology.

Gropper reports on two research results that indicate the inability of educational technology to allow for teacher-student interaction. These results include the absence of student participation and the absence of feedback from student to teacher. These results have led to research on feedback, including a) instrumental feedback to instructors, b) pre-testing of instruction (early feasibility studies), and c) research on the pre-testing approach. Gropper notes that generalizing revision procedures depends on two interdependent activities. These activities are diagnosis and revision. Gropper stresses that revision is the use of developed procedures at a different point in time, and that diagnosis must inform the direction of revision and what type of revision is needed.

Gropper further stresses that areas of diagnosis pose the biggest challenges. Gropper (1975) concludes that:

The future of a tryout and revision technology awaits an identification of the types of errors students commit both on programs and on tests and a parallel identification of the types of program weaknesses which are responsible for them. It also awaits the formulation of diagnostic procedures which, by making the needed identifications, can lead to relevant, reliably implemented revision (p. 9).

According to Gropper (1976), instructional theory arises from current innovative practice, and the search for the appropriate choices has led to selecting appropriate systems, non-systems, and anti-systems. Gropper notes, “What we have here to choose from are systems which stress either design of instruction, development of instruction, delivery of instruction, management of instruction, or evaluation of instruction” (p. 7).

Gropper further notes:

Generalizations based on the experience of this array of systems are not likely to result in a single unified and coherent theory of instruction. Nor, if a single, embracing theory were already available, is it likely that it could prescribe practice for the design, the development, the delivery, the management, and the evaluation of instruction. Since "all of the above" is not feasible, an aspiring theorist has to select the options he deems most likely to make an impact (Gropper, 1976, p. 7).


In his paper "What should a theory of instruction concern itself with?" Gropper provides a rationale for a paper he delivered in 1976 at the American Educational Research Association (AERA) on instructional theory. According to Gropper (1976), the rationale for this paper includes the theory of instruction, the type of questions, the detailed analysis of conditions, the treatment types, the author's personal experience, the characteristic practices of teachers and materials developers, and the kinds of research results. For the theory of instruction, Gropper (1976) asserts that a theory of instruction a) should be tried out; b) should state differences in treatments; c) must offer prescriptions, leading to the design of an appropriate cumulative experience; d) should provide prescriptions, generating such cumulative learning experiences; e) should be concerned with possible exceptions; f) must be concerned with when feedback or active response may or may not be applied; g) must prescribe specific treatments; h) must identify the nature of learners' differences in learning requirements; and i) must be concerned with integrating technology into teaching and learning (pp. 7-12).

In an article, "Programming visual presentations for procedural learning," Gropper in 1968 reported that any kind of performance, whether procedural or conceptual, must be analytically directed to the learning tasks involved in it (p. 35). Gropper provides "instructional strategies involving demonstrations for two different types of learning" (p. 35). Gropper notes that procedural learning is concerned with singular objects and events, generalization within class is not a concern, and discriminations are a concern. Gropper maintains that learning procedural skills involves the acquisition of discriminations and the acquisition of sequences of chained responses. Gropper suggests that "to teach procedural skills, visual demonstrations must provide the student with an opportunity to acquire the discriminations involved in the identification and selection of parts and acquire and retain the appropriate chains" (Gropper, 1968, p. 35). Gropper conducted a study in the context of practical problems that "concerned with two interrelated research issues". The issues were "the investigation of the effects on procedural learning of the size of the demonstration unit" variables "and the mode of student practice following the demonstration" variables (Gropper, 1968, p. 37).

The details of Gropper's (1968) study are presented as follows:

Method

Materials: television tapes. Gropper (1968) used four television tapes, and "all four" tapes "had review sequences built into them". All four had "provisions for student practice in recognizing correct selection of parts, correct part locations, and correct assembly sequences". The tapes "differed only with respect to the point at which, following the demonstration (which included recognition practice), additional student practice occurred" (p. 37).

Size of demonstration unit. Gropper's (1968) study was concerned with variations in how much of a procedural task could be demonstrated before practice was allowed. The tapes were therefore designed to provide an opportunity for practice at different points in the demonstration (p. 38). The total amount of practice allowed was identical for all four tapes. What varied were the time of its occurrence and the size of the practice unit (corresponding to the size of the demonstration unit). The four tapes represented a systematic experimental manipulation of the demonstration unit size.

Gropper provides the rationale for the choice of the demonstration units.

Provisions for review and practice. According to Gropper (1968), on each videotape, the review portions followed the demonstration and covered as many of the demonstration units as appeared on that tape. In addition to the review, which covered key steps in the assembly task, each tape had additional recognition practice built into it. The recognition practice covered only those units just previously demonstrated. This gave the student the opportunity, based on what he/she had learned during the demonstration, to edit or critique the assembly demonstrated (p. 39).

Mode of practice. According to Gropper (1968), for the added recognition practice, students followed instructional sequences that included a) "demonstration", b) "interspersed recognition practice", c) "review", and d) "added recognition practice" (p. 39). For the actual practice, students practiced producing an assembled motor following instructional sequences including a) "demonstration", b) "interspersed recognition practice", c) "review", and d) "actual assembly practice" (p. 39).

Demonstration content. According to Gropper (1968), the instructional objectives were to a) "familiarize the viewer with the parts to be selected and assembled for a particular unit"; b) "enable the viewer to determine which parts go in which location"; c) "enable the viewer to assemble the parts in an appropriate order or sequence"; and d) "enable the viewer to recognize what a properly assembled unit (or portion of a unit) looks like" (Gropper, 1968, p. 40). The procedures were emphasized in the review and recognition practice to reduce errors. Gropper provided "motor kits (all separate parts)", "workbooks (contain problem posed for each demonstration)", and "checklists (a guide for assembling the motor)" (p. 40).

Design of the Experiment

According to Gropper (1968), there were two experimentally manipulated variables. The first variable was the “size of the demonstration unit” (p. 41). The demonstration unit had four levels. The second variable was the practice mode having two levels (active and recognition). This resulted in eight experimental treatments.

Procedure. According to Gropper (1968), the eight experimental treatments were administered. Students in each of the treatments sat before a television monitor. Three to five students were assigned to each monitor. Students in the recognition group followed part of the demonstration. All students watched this part of the demonstration tape. The tape continued to roll. Students also continued to solve problems in their workbooks.

Students in the production group practiced putting together a motor. The tape stopped.

Students moved to their tables. The proctor observed students’ work. Students completed all the practice rounds. The students were asked to assemble another identical motor. In all the groups, students worked at their own paces. They were given sufficient time so as to correct their own errors. In situations where a student could not identify the errors, “the proctor provided cues to the student” (p. 42).

Sample. According to Gropper (1968), the sample size of his study was "89 seventh graders". These students "were assigned at random to one of the eight experimental conditions. For the purposes of analysis, students were identified". They were "assigned to further subdivisions in the design on the basis of sex and IQ following the demonstration of the experimental treatments" (p. 43).

Results. According to Gropper (1968), there are "two major issues about which data have been collected in this study" (p. 43). These issues include a) "How does the size of the demonstration unit (before practice is allowed) affect practice when it occurs?" and b) "What are the effects on criterion performance of two kinds of prior practice: actual practice vs. recognition practice?" (p. 43). Two other "questions raised about the mode of practice" include a) "Does recognition practice added to observation of a demonstration affect the first performance of the task demonstrated?" and b) "What is the comparative effect of recognition practice versus actual practice on subsequent criterion performance?" (p. 47). Other questions on procedural learning are a) "Was the programming strategy adopted effective?" b) "Did increasing the size of the demonstration unit reduce its effectiveness?" and c) "How does mode of practice affect performance?" Gropper concludes "that actual practice is superior to recognition practice" and that "there are circumstances when recognition practice may be adequate" (p. 55). Gropper notes that "these might include a) prior experience with the motor or procedural elements involved in a task;" b) "when the proportion of motor skill elements in the task is minimal;" or c) "when logistical or cost considerations may preclude the use of actual equipment" (p. 55).

In Gropper’s (1983b) chapter “A behavioral approach to instructional prescription”, the author seeks to review key concepts in behavioral approaches to learning and instruction. Gropper discusses learning concepts, instructional concepts, and instructional models. Gropper further discusses conditions, treatments, and matching treatments and conditions.

According to Gropper (1983a), achievement is a rising function of instruction. "Achievement should rise as a function of number of conditions treated and a function of the closeness of the match between need and levels of attention delivered" (p. 47); "correlations between achievement and instruction" (p. 47) were most significant; and the "relationship between instruction and achievement should hold" (p. 48). Gropper notes that "when two instructional theories and models address the same objectives, their similarities far outnumber their differences, and hence integration will be more useful than elimination" (Gropper, 1983a, p. 48).

Learning Principles

Svinicki (1991) provides six principles of cognitive theory and their practical implications for teaching. For the first principle, "If information is to be learned, it must first be recognized as important," instructors should direct more attention toward what is to be learned, which may enhance learning (p. 29). For the second principle, "During learning, learners act on information in ways that make it more meaningful," both instructors and students should use examples, images, elaborations, and connections to increase the meaningfulness of information (p. 30). For the third principle, "Learners store information in long-term memory in an organized fashion related to their existing understanding of the world," instructors should provide a familiar organizational structure and encourage students to create such structures to enhance student learning (p. 31). For the fourth principle, "Learners continually check understanding, which results in refinement and revision of what is retained," instructors should create opportunities for students to check and diagnose the learning situation (p. 32). For the fifth principle, "Transfer of learning to new contexts is not automatic but results from exposure to multiple applications," instructors should provide opportunities during initial learning for later transfer (p. 33). Finally, for the sixth principle, "Learning is facilitated when learners are aware of their learning strategies and monitor their use," instructors should help students learn how to translate these strategies into action at appropriate points in their learning (p. 34). According to Svinicki (1991), there are several objectives and instructional methods for teaching the content of learning strategies. Svinicki asserts that students should know a) "what cognitive learning strategies are" (p. 34), b) "how to monitor their own use of learning strategies", c) "when to use the strategies they have learned" (p. 35), and d) "how to adapt their strategies to new situations" (p. 36).

Curriculum

According to Carl Jung (as cited in Knowles, Holton, & Swanson, 2012), "human consciousness possesses four functions to extract information from experience to achieve internalized understanding, sensation, thought, emotion, and intuition" (p. 45). Carl Jung proposed that these four functions should be used to prepare a balanced personality and curriculum (Knowles et al., 2012, p. 45). Knowles et al. (2012) provide Rogers's five basic hypotheses for the student-centered approach. These five hypotheses are a) "we cannot teach another person directly; we can only facilitate his learning" (p. 48); b) "A person learns significantly only those things that he perceives as being involved in the maintenance of, or enhancement of, the structure of self" (p. 48); c) "Experience that, if assimilated, would involve a change in the organization of self… and the structure and organization of self appear to become more rigid under threats and to relax its boundaries when completely free from threat" (pp. 48-49); d) "Experience that is perceived as inconsistent with the self can only be assimilated if the current organization of self is relaxed and expanded to include it" (p. 49); and e) "The educational situation that most effectively promotes significant learning is one in which (1) threat to self of the learner is reduced to a minimum, and (2) differentiated perception of the field is facilitated" (p. 49).

Knowles et al. (2012) provided a description of the pedagogical model (teacher-directed education) in which the teacher is assigned "full responsibility for making all decisions about what will be learned, how it will be learned, when it will be learned, and if it has been learned" (p. 60). Knowles et al. provided six basic assumptions about learners, including a) "Learners only need to know that they must learn what the teacher teaches if they want to pass and get promoted; they do not need to know how what they learn will apply to their lives" (p. 60), b) "The teacher's concept of the learner is that of a dependent personality; therefore, the learner's self-concept eventually becomes that of a dependent personality" (p. 61), c) "The learner's experience is of little worth as a resource for learning; the experience that counts is that of the teacher, the textbook writer, and the audiovisual aids producer", d) "Learners become ready to learn what the teacher tells them they must learn if they want to pass and get promoted", e) "Learners have a subject-centered orientation to learning; they see learning as acquiring subject matter content", and f) "Learners are motivated to learn by external motivators" (p. 62).

Knowles et al. (2012) provided the assumptions of the andragogical model, including a) "adults need to know why they need to learn something before undertaking to learn it" (p. 63), b) "adults have a self-concept of being responsible for their own decisions, for their own lives" (p. 63), c) "adults come into an educational activity with both a greater volume and a different quality of experience from that of youths" (p. 64), d) "Adult learners become ready to learn those things they need to know and be able to do in order to cope effectively with their real life situations" (p. 65), e) "Adults are life-centered (or task-centered or problem-centered) in their orientation to learning" (p. 66), and f) "Adults are responsive to some external motivators (better jobs, promotions, and higher salaries), but the most potent motivators are internal pressures (the desire for increased job satisfaction, self-esteem, and quality of life)" (p. 67).

Knowles et al. (2012) stressed that "Any group of adults will be more heterogeneous in terms of background, learning style, motivation, needs, interests, and goals"; thus, "greater emphasis is placed on individualization of teaching and learning strategies" (p. 64). Knowles et al. provide some experiential techniques, including "group discussions, simulation exercises, problem solving activities, case methods, and laboratory methods" and "peer-helping activities" (p. 64). Knowles et al. stress that accumulated experience "tend[s] to develop mental habits, biases, and presuppositions causing closure to the mind for new ideas, fresh perceptions, and alternative ways of thinking" (p. 65). Knowles et al. note that "To children, experience is something that happens to them; to adults, experience is who they are" (p. 65). The implication is that "in any situation in which the participants' experiences are ignored or devalued, adults will perceive this as rejecting not only their experience, but rejecting themselves as persons" (p. 65). Knowles et al. provide ways that readiness can be induced, such as exposure "to models of superior performance, career counseling, and simulation exercises" (p. 66). The implication for readiness is "the importance of timing learning experiences to coincide with those developmental tasks" (p. 65).

Constructivist Instructional Principles

Savery and Duffy (1996; as cited in Knowles et al., 2012) provide eight constructivist instructional principles for a different approach to learning. These principles include a) "Anchor all learning activities to a larger task or problem"; b) "Support the learner in developing ownership for the overall problem or task"; c) "Design an authentic task"; d) "Design the task and the learning environment to reflect the complexity of the environment in which learners should be able to function at the end of learning"; e) "Give the learner ownership of the process used to develop a situation"; f) "Design the learning environment to support and challenge the learner's thinking"; g) "Encourage testing ideas against alternative views and alternative contexts"; and h) "Provide opportunity for and support reflection on both the content learned and the learning process" (p. 191). Constructivist and authentic learning may promote integration (Prideaux, 2009, p. 186).

Cognitive Entry Behaviors as Causal Variables

According to Bloom (1976), education and learning in the schools are built on sets of cognitive entry behaviors (prerequisite learnings). Bloom (1976) provides the basis of this assumption, including a) particular tests of achievement and/or aptitude given prior to learning or a set of learning tasks enable one to predict to some extent the level or rate of achievement of students by the end of the task, course, or set of learning tasks; b) the achievement variation of students at the end of the year or term is highly related to their variation in achievement over related school subjects prior to the beginning of the year or term; c) the students have had certain prior learning; and d) almost every learning task has a base in some prior learning (pp. 32-33). Bloom distinguishes between micro-studies and macro-studies. Bloom notes that a micro-study focuses on cognitive entry behaviors for particular learning tasks within a set or series of learning tasks, whereas a macro-study focuses on "a course, term, or year of instruction in a subject, the achievement at the end of the course or term of instruction, and certain measures available prior to the beginning of the course" (p. 38). Bloom discusses some macro-level studies of cognitive entry behaviors and determines the relations between cognitive entry behaviors prior to a course or term of instruction and achievement at the end of the instruction (p. 39).

Bloom (1976) uses longitudinal studies of achievement, where there is a measure of achievement, followed by a period of learning, followed by another measure of achievement, to determine the predictability of achievement at grade 12 from earlier measures of achievement. Bloom found that the estimated correlation between general measures of achievement at grade 2 and grade 12 is +.60, and between achievement at grades 10 and 12 it is +.90 (p. 39). Bloom found that almost three-fourths of the variation in achievement at the end of a course is predictable from the measure of achievement or pretest before the course started. Bloom believes such prior measures of achievement include the effects of cognitive entry behaviors (prerequisite content plus prerequisite general skills), affective entry characteristics, and the overlap between the two measures of achievement. In the case of the overlap, Bloom notes that some students at the beginning of the course already had, to a considerable degree, attained some of the characteristics measured by the final measure. Furthermore, Bloom states that the same or similar achievement tests were administered at both times, so that the later achievement measure includes what the student had already attained on the prior measure plus the change that took place during the single academic year. Bloom concludes that these results are overestimates of the effects of cognitive entry behaviors on subsequent learning (pp. 41-43).
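To connect the correlations Bloom reports with the "variation … predictable" phrasing, the following worked note applies the standard coefficient-of-determination relation; the arithmetic is illustrative and is not quoted from Bloom.

% Proportion of variance in the later measure predictable from the earlier measure: R^2 = r^2.
\[ r_{2,12} = .60 \;\Rightarrow\; r^{2} = .36 \qquad r_{10,12} = .90 \;\Rightarrow\; r^{2} = .81 \]
% "Almost three-fourths of the variation" corresponds to r^2 of roughly .75,
% i.e., a pretest-to-posttest correlation of approximately .85 to .87.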

Learning Styles

Several studies provide details about learning styles (Curry, 1990; Ducette et al., 1996; Knowles et al., 2012; Pashler et al., 2009; Robertson et al., 2011; Weimer, 2014).

Learning styles definition. Learning style has been defined as "the concept that individuals differ in regard to what mode of instruction or study is most effective for them" (Pashler et al., 2009, p. 105). Pashler et al. (2009) also defined learning style as "the view that different people learn information in different ways" (p. 106).

Issues and controversies of learning styles. According to Curry (1990), the basic idea that individuals differ and that instruction should be modified to take these individual differences into account is not a learning style problem. Curry provides three general problems of learning styles. These problems are “confusion in definitions, weakness in reliability and validity of instruments, and identification of relevant characteristics in learners and settings” (Curry, 1990, p. 50).

Proponents of learning style assessment, according to Pashler et al. (2009), contended that optimal instruction required "diagnosing individuals' learning style and tailoring instruction" to it (p. 105). Pashler et al. concluded that "any credible validation of learning-styles-based instruction requires robust documentation of a very particular type of experimental finding with several necessary criteria" (p. 105). These criteria included a) "students must be divided into groups on the basis of their learning styles, and then students from each group must be randomly assigned to receive one of multiple instructional methods"; b) "students must then sit for a final test that is the same for all students"; and c) "in order to demonstrate that optimal learning required that students received instruction tailored to their putative learning style, the experiment must reveal a specific type of interaction between learning style and instructional method" (Pashler et al., 2009, p. 105). Pashler et al. asserted that the instructional method that proved "most effective for students with one learning style is not the most effective method for students with a different learning style" (p. 105). Pashler et al. noted that "children and adults would express preferences about how they preferred information to be presented to them when they were asked"; that "people differed in the degree to which they have some fairly specific aptitudes for different kinds of thinking and for processing different types of information"; that "there was no adequate evidence base to justify incorporating learning-styles assessments into general educational practice" (p. 105); that "assessing a student's learning style would be helpful in providing effective instruction for that student" (p. 108).

According to Weimer (2014), learning style is a dichotomous perspective. It makes complicated things easy. Weimer proposed that students take distinctly different approaches to learning. The students' styles can be detected with easily administered instruments. Weimer reported some doubts about learning styles. These doubts include a) the instruments that detect, name, and classify the various approaches to learning; b) how there can be only two or four styles; and c) how every learner can fit exactly into one of the styles. Weimer presented an unarguable fact that people do not all learn in the same way. According to Robertson et al. (2011), the key messages for learning styles include a) "Personal awareness of learning styles and confidence in communicating this are first steps to achieving an optimal learning environment"; and b) "A conversation about learning styles between fieldwork supervisor and student enhances the fieldwork experience" (p. 39).

Learning styles implications for education. According to Ducette et al. (1996), studies reveal that teacher educators should understand the rationale underlying the sensitive and appropriate call for environment and for individual learners. Ducette et al. (1996) note that instruction at all levels should use varied formats and modalities and should in general match instruction to a student's strengths and preferences. Ducette et al. stress that "learning style theories have had a positive effect"; that "people learn differently"; and that "some learners have strengths in areas that are unique to them and that make them different from other learners" (Ducette et al., 1996, p. 336).

Assumptions of learning style. Ducette et al. (1996) provide some assumptions of learning style. These assumptions include a) students enter a learning situation with a variety of skills, preferences, and capacities; b) these skills, preferences, and capacities affect their learning; and c) matching the learner and the learning environment facilitates learning for the learner (Ducette et al., 1996). Ducette et al. suggest that "another learner with different strengths and different preferences will do better in a different environment" (p. 331). Curry (1990) notes that explicit attention to learning styles will improve the educational process, including curriculum design, instructional methods, assessment methods, and student guidance.

Learning styles pros and cons. According to Knowles et al. (2012), "learning styles have great face validity for learning professionals; and various dimensions of learning styles improve learning situations and reach more learners" (p. 211). However, "there is no unifying theory or generally accepted approach to learning style research and practice; learning styles fail to separate the validity of learning-style theory and constructs from the measurement issues" (Knowles et al., 2012, p. 213). From research studies, Knowles et al. (2012) provide the best uses of learning style instruments, namely, a) creating "awareness among learning leaders and learners that individuals have different preferences", b) "as starting points for learners to explore their preferences", and c) "as catalysts for discussion between leaders and learners about the best learning strategies" (pp. 213-214).

Knowles et al. (2012) note that “individuals vary in their approaches, strategies, and preferences during learning activities; that those differences significantly improve learning; that understanding individual differences helps make andragogy more effective in practice” (p. 214). Knowles et al. describe effective ways adult learning professionals can use individual differences in learning experiences. These ways include a) tailoring “the manner in which they apply the core principles to fit adult learners’ cognitive abilities and learning-style preferences”; b) using “their understanding of individual differences to know which of the core principles are applicable to a specific group of learners”; and c) effectively using “their understanding of individual differences to expand the goals of learning experiences” (p. 214).

Self-Efficacy

Several studies provide detailed accounts of self-efficacy (Bandura, 1977, 1982, 1986, 1993, 1997; Bandura & Cervone, 1983; Bandura & Locke, 2003; Ford, 1992; Reeve, 2005). Self-efficacy concerns the finding that “Students’ beliefs in their efficacy to regulate their own learning and to master academic activities determine their aspirations, level of motivation, and academic accomplishments” (Bandura, 1993, p. 117). Based on social cognitive theory, Bandura (1986) defines self-efficacy as a belief that one can execute a particular behavior. A particular behavior may produce an outcome, and an outcome may in turn be an achievement or a competence.

According to Ford (1992), achievement/competence is a function of motivation, skill, and a responsive environment (p. 123). Ford defines collective efficacy as “shared beliefs about the capabilities of a group, organization, or nation for effective action” (p. 123). Ford defines capability beliefs as “evaluations of whether one has the personal skills needed to function effectively” (p. 123). Ford asserts that these beliefs may pertain to a diversity of instrumental capabilities. Ford defines context beliefs as evaluations of whether one has the responsive environment needed to support “effective functioning” (p. 131). Ford notes that these evaluations may pertain to the goals afforded by a particular context, the “goodness of fit” with one’s capabilities, the material and informational resources available in that context, or the social-emotional climate provided by that context. Ford defines achievement as the attainment of a personally or socially valued goal in a particular context (Ford, 1992).
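Ford’s verbal statement can be written compactly. The multiplicative rendering below is a common way of summarizing Motivational Systems Theory rather than notation quoted from Ford (1992):

\[
\text{Achievement/Competence} = f\bigl(\text{Motivation} \times \text{Skill} \times \text{Responsive Environment}\bigr)
\]

On this reading, a deficit in any one of the three factors limits achievement regardless of the strength of the other two.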

Bandura (1977) hypothesized “that expectations of personal efficacy determine whether coping behavior will be initiated, how much effort will be expended, and how long it will be sustained in the face of obstacles and aversive experiences” (p. 191). Bandura (1977) found that perceived self-efficacy was related to behavioral changes. Perceived self-efficacy influences the choice of activities and settings and, through that choice, affects coping efforts (Bandura, 1977). The stronger the perceived self-efficacy, the more active the efforts. Bandura stressed that both capability and expectation are required to produce the desired performance. Bandura asserts that “self-efficacy proved to be an accurate predictor of performance in the enactive mode of treatment because subjects were simply judging their future performance from their past behavior” (p. 211).

Bandura (1982) notes that “self-percepts of efficacy influence thought patterns, actions, and emotional arousal” (p. 122). Bandura stresses that “in causal tests the higher the level of induced self-efficacy, the higher the performance accomplishments and the lower the emotional arousal” (p. 122). Bandura notes that “level of perceived self-efficacy correlates positively with range of career options seriously considered and the degree of interest shown in them” (p. 136). “Experiences that increase coping efficacy can diminish fear arousal and increase commerce with what was previously dreaded and avoided” (p. 136). Bandura (1982) notes, “knowledge of personal efficacy is not unrelated to perceived group efficacy,” and “that collective efficacy is rooted in self-efficacy” (p. 142). Bandura and Cervone (1983) found that “the higher the self-dissatisfaction with a substandard performance and the stronger the perceived self-efficacy for goal attainment, the greater was the subsequent intensification of effort” (p. 1017). On the basis of self-percepts of efficacy, people make their own choice of activities, determine their capabilities, and determine their perseverance (Bandura, 1977).

Bandura and Cervone (1983) note, “Those who have a low sense of self-efficacy may be easily discouraged by failure; whereas those who are assured of their capabilities for goal attainment intensify their efforts when their performances fall short and persist until they succeed” (p. 1018). Bandura (1982) asserts that behavior could predict self-efficacy and outcome beliefs. Bandura’s (1977) work on self-efficacy “regarding fixed or malleable abilities similarly suggests that individual differences at one point in time can lead to choices that effectively change the actor’s life-space and magnify the impact of those personal characteristics” (p. 25).

Covariation and theory are related. According to Bandura (1982), “covariation increases confidence in a theory”, but it does not “establish firmly its validity because the covariation can be mediated through other mechanisms capable of producing similar effects” (p. 123). Bandura explains that controllability involves more than covariation “because the relationship between actual and self-perceived coping efficacy is far from perfect” (p. 136).

Bandura and Locke (2003) reject the claim that “perceived self-efficacy is just a reflection of prior performance” (p. 89). Bandura and Locke note that “perceived self-efficacy contributes independently to subsequent performance after controlling for prior performance and indices of ability” (p. 89). Bandura and Locke note that “the higher the perceived self-efficacy to fulfill educational requirements and occupational roles is, … the better they prepare themselves educationally for different occupational careers, and the greater is their staying power in challenging career pursuit” (p. 90). Bandura and Locke note that social cognitive theory specifies four core features of human agency (Bandura & Locke, 2003). These features are a) “intentionality”, b) “forethought”, c) “self-reactiveness”, and d) “self-reflectiveness” (p. 97). Students with a learning goal orientation tend to make positive attributions for success and sustain their self-efficacy for learning (Bandura, 1993). Empowering students through self-efficacy training increases their self-efficacy (Reeve, 2005). When students’ perceived competence is increased through timely feedback, their perception of their own learning abilities improves (Bandura, 1997).

Quality of Instruction

On quality of instruction, Bloom (1976) suggests that “the lack of the necessary prerequisite cognitive entry behaviors for a particular learning task should make it impossible for the student to master the learning task requirements no matter how good the quality of instruction for that task” (p. 109). According to Bloom (1976), “the securing of the necessary prerequisites and the redefinition, redesign, or restructuring of the learning task or some combination of the two may overcome the initial lack of cognitive entry behaviors” (p. 109). Bloom maintains that “the quality of instruction can have powerful effects on learning over particular tasks” (p. 110). Bloom believes that the teaching, rather than the teacher, is central, and that it is the “environment for learning in the classroom” rather than “the physical characteristics of the class and classroom” that is important for school learning (p. 111). Carroll (1963; as cited in Bloom, 1976) refers to quality of instruction as “the degree to which the presentation, explanation, and ordering of elements of the task to be learned approach the optimum for a given student” (p. 111).

Bloom notes that this definition assumes that each student can learn if the instruction approaches the optimum for him or her; however, Bloom suggests that “students differ in the qualities of instruction they need to learn a given task” (p. 112).

According to Bloom (1976), quality of instruction refers to “particular characteristics of the interaction between instruction and students” (p. 134). Bloom stresses that “cues-participation-reinforcement” are “the major characteristics in the instruction … and their effects on student learning” (p. 134). Because “group instruction tends to be differentially sensitive and responsive to the students in a class,” Bloom emphasizes “the use of feedback and corrective procedures as one means of ensuring that each student gets good quality of instruction as he needs” (p. 134). Bloom notes that “there is evidence in the macro-studies that the qualities of cues, participation, and reinforcement can account for at least 20 percent of the variation in the student learning”, whereas “in the micro-studies the evidence indicates that about 25 percent of the variation in student learning can be related to the improvement of the cues and participation through feedback and corrective procedures for each student” (p. 134).

Bloom suggests further research should be done “to determine the extent to which quality of instruction has long-term effects on both achievement and the learning processes of students” (p. 134).

Bloom (1976) maintains that “the level of participation” is one of “the strongest symptoms for quality of instruction” (p. 134). Bloom asserts “that this is the clearest indicator of the effectiveness of instruction” (p. 134). Thus, Bloom notes that “the extent and type of participation in the learning process … turns out to be the best single indicator of quality of instruction” (p. 134). Bloom finds “those students who have developed effective learning and study procedures will be less affected by variations in the quality of instruction than students who have developed less effective learning procedures” (p. 134). Similarly, Bloom finds that “more mature and capable students will be least influenced by varying qualities of instruction, while less mature or less capable students will be most influenced by quality of instruction” (p. 134).

Quality of instruction as a causal variable. According to Bloom (1976), instruction is “important in determining and influencing the students’ achievement” (p. 135). Bloom notes “that quality of instruction can account for at least one-fourth (r = +.50) of the variance on relevant cognitive achievement measures” (p. 135). Bloom asserts that “quality of instruction is a causal link in determining learning and in accounting for educational achievement” (p. 135). Bloom believes “that quality of instruction, and especially the use or absence of feedback/corrective procedures, is effective in determining the extent to which two comparable groups of students will learn well or poorly in a given set of learning situations” (p. 135). Bloom notes that in a micro-study “students who were similar at the beginning of instruction but were placed by chance in a mastery or non-mastery class (taught by the same teacher) became more and more differential in their achievement over each successive learning task” and “on the final summative achievement measures” (p. 135). Bloom suggests that the “increasing differentiation between the two groups is suggestive of the causal effect of the instructional procedures on the student learning outcomes” (p. 135). Bloom states that “quality of instruction has an effect on the learning processes of students and on their learning outcomes” (p. 135). Bloom stresses that “further research is needed to understand the causal effects of quality of instruction on cognitive development, learning processes, and affective development of students” (p. 136).
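The “one-fourth” figure and the correlation of +.50 are consistent because the proportion of variance a correlation accounts for is the square of that correlation; this is the standard statistical relationship rather than a separate computation reported by Bloom:

\[
r^{2} = (+.50)^{2} = .25 = \tfrac{1}{4}
\]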

Achievement

Nature of the meaning of achievement. Maehr (1983) notes that “achievement takes on different meanings at different ages and is exhibited in different forms and in different places,” that “there is a subjective side of achievement,” and that “objective assessments of personality and situation are irrelevant to the study of achievement” (p. 190). Maehr provides some facets of the meaning of achievement, including a) judgments about one’s competence to perform; b) judgments about one’s role in initiating and controlling the performance; c) projected goals in performing; and d) perceived ways in which these goals can and should be reached (p. 190).

According to Dahllöf (1971), most research on “grouping and achievement is purely descriptive; the achievement level (dependent variable) is generally based on intellectual ability or standardized achievement tests” (p. 3). Dahllöf notes that “grouping has little or no bearing on pupils’ achievement” (p. 3); however, Dahllöf provides no reason or theory that explains this result. Dahllöf notes that “the grouping system (independent variable) is directly related to the achievement level as measured one or two school years after the grouping took place” (p. 4). Dahllöf notes that “Teaching process data may supply information about methods of instruction, the time required for learning different units, and the elements of the curriculum” (p. 4). Dahllöf stresses that “achievement must never be regarded as a direct outcome of the grouping arrangements rather the actual teaching process, the general style of instruction, the teachers and their competence” (p. 4). Dahllöf suggests that some “aspects of the teaching process” should be used “as the dependent variable in future research on grouping”; and “the relation among grouping, process, achievement, and objectives must be considered more systematically in future research” (p. 4).

Dahllöf (1971) provides some methodological problems in interpreting trend results. These methodological problems are a) the “marked ceiling effects in some of the tests”, b) the “tests intended to measure the effect of two years’ differential grouping in the experimental group are identical with the tests used as control variables in the control group” (p. 18), c) the same curriculum objectives, d) the same number of hours and lessons a week and a year, and e) in “the traditional classroom type”, the teacher addresses “the whole class” (p. 21). Dahllöf concludes that “the same objectives may thus be accomplished through different educational processes” (p. 21).

Dahllöf (1971) discusses some general methods of teaching. These methods are a) the pupils are kept together and have to start every lesson with the same exercise, and the differences between individuals are smoothed out through homework; b) the pupils are kept together within the same curriculum unit, but some are allowed to work with the common core while others are working with additional problems or enrichment exercises; c) the pupils are kept together in groups, each group working at its own pace; and d) the pupils are all allowed to work at their individual pace (p. 48). According to Dahllöf (1971), the traditional type of classroom instruction was the dominant method of teaching in all school types concerned. Dahllöf notes that the new comprehensive schools did not differ from the traditional schools apart from lacking appropriate teacher training, and that all operated under the same conditions as far as the general methods of teaching are concerned (p. 49).

Age

Although age has been identified as an important factor in educational achievement in general (Maehr, 1983), it has never been investigated as an important factor in inter-professional educational scales. Current scales assess readiness and attitudes more than achievement (Curran, Sharpe, Forristall, & Flynn, 2008; McFadyen, Webster, Strachen, Figgins, Brown, & McKechnie, 2005; Parsell & Bligh, 1999).

History of a Student

According to Bloom (1976), the history of a student is the characteristics the student brings to a particular learning task or set of learning tasks. Bloom suggests that what happens to the student within the learning task(s) also becomes part of his history. Bloom notes that quality of instruction is a major variable in determining the history of the student within a particular set of learning tasks. Bloom suggests that “an individualized approach to quality of instruction must accompany group qualities of instruction if most of the students are to learn more effectively than at present appears to be the case in countries throughout the world” (p. 136).

Alterability of Learning

According to Bloom (1976), “entry characteristics (cognitive and affective) may become resistant to marked change after the individual has accumulated a long history of experience with particular types of learning tasks” (p. 136). Bloom argues that “there is almost no point in the individual’s history when his learning characteristics cannot be altered either positively or negatively” (p. 137).

Bloom maintains that quality of instruction can be improved so that a larger proportion of students attain a level of mastery with consequent changes in their subsequent cognitive entry behaviors and affective entry characteristics. Bloom stresses that very poor qualities of instruction can have negative effects on current and subsequent learning. Bloom suggests that the proportion of students whose learning characteristics can be altered in a positive way is likely to be highest on the early learning tasks in a series and to be somewhat reduced the later the optimal qualities of instruction are introduced in the series. Bloom maintains that each learning series does not start at the beginning of school.

Bodner et al. (2014) trace a 25-year evolution in the practice of teaching chemistry and examine some implications of the constructivist theory of knowledge. In addition, Bodner et al. emphasize how a shift can be made from the direct model (direct methods) of the instructor as a teacher to a constructivist model (outcome-based or competency-based methods) of the instructor as a facilitator; a shift from the instructor as a controller of the teaching and learning process to the instructor as a negotiator of the teaching and learning process between the instructor and students. Further, Bodner et al. (2014) describe the direct approach to teaching as “a series of lectures in which scholars summarize the state of knowledge in their area of expertise” (p. 130).

Direct Instruction Approach

According to Felder and Brent (2005), the direct instruction approach favors intuitive, verbal, reflective, and sequential students. Students are taught within their own health care professions and acquire profession-specific educational knowledge. Here the core set of competencies is based on learning outcomes, for example, knowledge, skills, and attitudes (Barr, 2009). This lecture-based system is self-serving, creates barriers between professions, and impedes improvement of health care (IOM, 2001). Bunce (2014) notes common misconceptions that teaching and learning are two distinct processes, that when something is taught the students will learn, and that when students do not learn they were not prepared for the course. Bunce reports that such misconceptions about teaching and learning persist when they support the current culture of teaching and learning (Bunce, 2014).

Haas (2002) conducts a meta-analysis focusing “on research with methods for teaching secondary level algebra from 1980 to 2001” (p. 2). From a sample of 34 studies with 62 effect sizes, six categories of teaching methods and corresponding effect sizes were derived for “good” studies: direct instruction (.67), problem-based learning (.44), technology aided instruction (.41), cooperative learning (.26), manipulatives, models, and multiple representations (.23), and communication and study skills (.16) (Haas, 2002, p. 2). Haas’ “Meta-and regression analysis results suggest that Algebra I teachers should emphasize direct instruction, technology aided instruction, and problem-based learning” (p. 3), because these three teaching method categories were ranked highest in both analyses. It may appear from Haas’ findings that direct instruction is one of the best instructional methods.
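The values Haas reports are standardized mean-difference effect sizes. As a general reminder of how such an index is computed (the conventional formula, not a detail taken from Haas), the effect size compares the mean of a group taught with a given method to the mean of a comparison group, scaled by the pooled standard deviation:

\[
d = \frac{\bar{X}_{\text{method}} - \bar{X}_{\text{comparison}}}{SD_{\text{pooled}}}
\]

Read this way, direct instruction’s effect size of .67 indicates roughly two-thirds of a standard deviation of advantage over comparison instruction.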

Student-Driven Participatory Approach

Several student-centered teaching approaches accomplish the goal of actively involving students in learning tasks. Felder and Brent (2005) note that student-centered teaching is called “active learning (engaging students in class activities other than listening to lectures)” and “cooperative learning (getting students to work in small teams on projects or homework under conditions that hold all team members accountable for the learning objectives associated with the assignment)” (p. 64).

Pedersen and Liu (2003) note that student-centered learning includes “case-based learning, goal-based scenario, learning by design, project-based learning, and problem-based learning” (p. 57). Pedersen and Liu (2003) provide the key differences between direct instructional approaches and student-centered approaches. These differences are “goals, roles, motivational orientations, assessments, and student interactions” (p. 58).

Pedersen and Liu (2003) conduct a study to “identify some of the issues teachers are concerned with when implementing Alien Rescue, a computer-based program designed to support student-centered learning” (p. 72). Pedersen and Liu provide three considerations for designers of student-centered programs. These considerations are a) “provide scaffolds for students with special needs”; b) “support factual knowledge acquisition”; and c) “capitalize on the multimedia affordances of computer technology to create new learning experiences for students” (text, graphics, audio, video, 3-D images, and animation) (p. 73).

Pedersen and Liu (2003) provide suggestions for professional development, including a) “avoid the use of the terms student-centered learning and facilitator, or carefully build a common definition with teachers”; and b) “address the many benefits of collaboration” (p. 74). Pedersen and Liu note some potential benefits of collaboration. These benefits include a) “collaboration is an end in itself”; b) “collaboration improved communication skills and better ideas”; and c) “students developed an understanding of the role of collaboration in scientific inquiry, the opportunity to identify misconceptions during collaborative exchanges, or the value of peer modeling for students on both ends of the dialogue” (p. 74). Pedersen and Liu suggest that “professional development programs should focus on helping instructors recognize this impact and hone their skills in supporting collaboration that promotes a great deal more than just the development of social skills” (p. 74).

Splan, Porr, and Broyles (2011) conduct an “interpretivist qualitative study … to analyze the epistemological underpinnings of constructivism” (p. 56). Splan et al. (2011) found two “pedagogical principles of constructivist learning theory” (p. 56). These principles are that “learning should be authentic, active, and student-centered” and that learning “must also be facilitated through social negotiation” (interaction) (Splan et al., 2011, p. 56). Splan et al. maintain that both “factors are inherent in the learning process when faculty mentors scaffold the creation of new knowledge via undergraduate research” (p. 56). In contrast, Sefton (2009) notes that in outcome-based education, faculty emphasize what they expect students will achieve when they complete their course.

Bodner et al. (2014) describe a student-centered class as being interactive. The focus of the student-driven participatory approach is to expose the students to a topic, but not necessarily to have them master the topic. Bodner et al. note that the instructor in the student-centered class maximizes the students’ involvement and minimizes his/her own involvement. The instructor introduces a topic and then provides the whole class or small groups with an activity. The instructor devotes his/her time to giving direction and clarification, and provides immediate feedback to the students.

The goal of the student. In the direct approach, students solve problems the teacher sets (Felder & Brent, 2005). However, in student-centered learning, students set and solve their own problems (Pedersen & Liu, 2003); students decide what they need to do and to know in order to address the question. Pedersen and Liu suggest that student-centered approaches promote greater student autonomy in the learning process than the direct approach does (Pedersen & Liu, 2003).

The role of the instructor. In the direct approach, the instructor determines the learning objectives and the instructional activities for students’ learning (Pedersen & Liu, 2003). Here the instructor guides or directs the students through a well-designed process intended to minimize errors. In student-centered learning, the instructor poses the central questions (issues, cases, problems); students then determine the processes, formulate hypotheses, and solve the problems. Instructors coach students who ask for assistance to identify alternative paths or resources for solving the problems.

Felder and Brent (2004) describe an alternative and more effective instructional approach. This approach focuses on teaching inductively: problems are presented first, and the required material is then taught as students need it to address those problems (Felder & Brent, 2004). Felder and Brent describe various approaches that emphasize teaching inductively. These approaches include problem-based learning, inquiry-based learning, need-to-know learning, and just-in-time learning (Felder & Brent, 2004). Felder and Brent report that instructors are less comfortable with these methods and that students may be distressed because they may not appreciate working with unfamiliar problems. Felder and Brent (2004) stress, however, that inductive teaching may lead students to the expected mastery of knowledge and skills. Felder and Brent (2004) note that inductive presentation is neither concise nor prescriptive.

The introduction of interdisciplinary health education leads to reform of the existing curriculum. The existing curriculum has been restructured to include the selected core competencies. The resulting curriculum aims at preparing all health professions students for inter-professional collaborative learning and practice. This means that health professions students will work in teams with a common goal to solve problems and to make team decisions about health care. Here the core set of competencies is based on learning process and outcomes, namely knowledge, skills, attitudes, and values. According to McNair (2005), this system enables health professions students to develop and express values of trust. In the interdisciplinary education system, students learn from different professions through interaction with each other. In school, they are to work effectively in teams while doing their clinical work.

IPEC (2011) provides some principles for the inter-professional competencies. The principles include a) “patient/family centered”; b) “community/population oriented”; c) “relationship focused”; and d) “process oriented” (IPEC, 2011, p. 2). Other principles are a) “linked to learning activities, educational strategies, and behavioral assessments that are developmentally appropriate for the student”; b) “able to be integrated across the learning continuum”; c) “sensitive to the systems context/applicable across practice settings”; d) “applicable across professions”; e) “stated in language common and meaningful across the professions”; and f) “outcome driven” (IPEC, 2011, p. 2).

Student-centered instructional environment. A student-centered instructional environment should include a) “inductive learning (problem or project based learning, guided inquiry)”, b) “active and cooperative learning”, and c) “measures to defuse resistance to student-centered instruction” (Felder & Brent, 2005, p. 67). Felder and Brent (2004) define three instructional approaches as a) “active learning (getting students to do things in class that actively engage them with the material being taught), b) cooperative learning (putting students to work in teams under conditions that promote the development of teamwork skills while assuring individual accountability for the entire assignment), and c) problem-based learning and similar approaches (teaching material only after a need to know it has been established in the context of a complex question or problem, which increases the likelihood that the students will absorb and retain it)” (Felder & Brent, 2004, p. 1).

Problem- or case-based learning. According to Felder and Brent (2004), formal problem-based learning provides students with significant problems that require knowledge and skills to solve, with students working in teams through the following steps: a) “define the problem”; b) “build hypotheses to initiate the solution process”; c) “identify what is known, what must be determined, and what to do”; d) “generate possible solutions and decide on the best one”; e) “complete the best solution, test it, and either accept it or reject it and go back to step (b)”; and f) “reflect on lessons learned” (Felder & Brent, 2004, p. 6). Problem-based learning “is a proven strategy for learning”; “a faculty is a facilitator”; and the goals include “self-directed learning, scientific and clinical reasoning, communication and teamwork” (Sefton, 2009, p. 179).

The instructor acts as a consultant, lecturing only when the students request it in the context of the problem (Felder & Brent, 2004). Students set learning goals and discuss them in teams across the disciplines (Sefton, 2009). According to Locatis (2007), problem-based learning exposes students to “a rich array of real and simulated patient cases” (p. 200). Cases were presented; and students worked in groups, assessed the patient’s problems, generated hypotheses, gathered data, independently researched, and discussed information bearing on the case.

Evidence-based medicine involves systematic processes. Formulating clinical questions is one of these processes. Other systematic processes include “…finding evidence in the medical literature that addresses the questions, critically appraising the evidence, and applying the evidence to specific patients …” (Locatis, 2007, p. 201).

Project-based learning. A related but less formal instructional approach is project-based learning. Here, the learning takes place in the context of projects, and lectures play a subsidiary role or do not take place at all (Felder & Brent, 2004). Each experiment is considered a project. Felder and Brent (2004) report that, whether adopting project-based learning or problem-based learning, all of the cooperative learning strategies can be used to teach teams of students for optimum effectiveness.

The IOM committee notes that health professions education faces serious and pervasive quality problems (Greiner & Knebel, 2003). The committee aims at solving these existing quality problems. The committee maintains that there is an absence of interdisciplinary forums, that the academic environments of the various health professions lack interdisciplinary practice environments, that the practice environments require interdisciplinary teamwork, and that “the environments are seriously disconnected” (Greiner & Knebel, 2003, p. ix). From what the committee experienced at the collaborating interdisciplinary summit, it notes that there has been overlap and fusion of roles. The committee believes that collaboration among clinicians in practice settings may draw strength from each profession and may lead to maximum care for patients. The committee further believes that the same can happen in health professions education if a collaborative approach to educational reform is embraced. The committee describes how the health professions education system must be radically transformed to close the gap between what is known to be good quality care and what actually exists in practice (Greiner & Knebel, 2003). The committee notes that there is a need for skilled personnel in order to advance quality.

The committee recommends:

a mix of approaches related to oversight processes, the training environment, research, public reporting, and leadership. The recommendations targeting oversight organizations include integrating core competencies into accreditation, and credentialing processes across the professions. The goal is an outcome-based education system that better prepares clinicians to meet both the needs of patients and the requirements of a changing health system. (Greiner & Knebel, 2003, p. 1)

The new vision for health professions education is that “All health professionals should be educated to deliver patient-centered care as members of an interdisciplinary team, emphasizing evidence-based practice, quality improvement approaches, and informatics” (Greiner & Knebel, 2003, p. 3, italics in original).

Felder and Brent (2000) suggest that students at remote sites can be engaged to collaborate on problem sets or projects. The instructor should organize virtual teams and set them up to interact electronically using any tools available. Felder and Brent stress that simply asking students to do something in groups is not enough for effective learning. Felder and Brent remark that students in direct classes may do little or no work but get the same grade as those who are industrious; that this behavior breeds serious conflicts between teammates; and that the problem may be more serious when the groups are virtual, because virtual groups lack the self-regulating capability of face-to-face groups. Felder and Brent note that the defining principles of cooperative learning should be adhered to in distance classes, and they offer criteria for cooperative learning and tips for making group work effective. The tips and criteria include a) make it clear to the students why group work is required, b) form small teams that are balanced in knowledge and skills, c) give clear directions regarding both the assignments and the communication tools, d) monitor team progress and be available to consult when teams are having problems, e) intervene when necessary to help teams overcome interpersonal problems, f) collect peer ratings of individual citizenship and use the ratings to adjust the team assignment grades, and g) anticipate problems and get help when necessary (Felder & Brent, 2000, p. 47).

Impact of the Different Approaches for Inter-Professional Healthcare Classroom

Sargeant (2009) reports, “education is one way to increase collaboration and communication, and it is an explicit goal of inter-professional education (IPE)” (p. 178). Sargeant notes that IPE is “socially created through interactions with others and involves unique collaborative skills and attitudes” (p. 178). IPE involves “thinking differently about teaching and learning” (p. 178). Sargeant notes that situated learning and communities of practice are fitting models for professionals (Sargeant, 2009). Sargeant provides lessons for practice, including a) “IPE is learning about how to work together and the roles of others”; b) “social theories of learning demonstrate that the IPE curriculum is best learned through interaction and collaborative knowledge creation”; c) “social theories also show that IPE content needs to address barriers posed by stereotypes, social identity, and professional socialization”; d) “reflection upon learning and practices is integral to IPE”; and e) “continuing IPE is transformative learning” (p. 183).

DiGiovanni and McCarthy (in press) observe that “students came into IP class rating their skills as high, with the exception of two areas (informatics and quality improvement), in which students started with a much lower baseline” (p. 37). DiGiovanni and McCarthy note that coordinating team scheduling was a challenge for students who preferred working in teams, and that the problem was solved through synchronous and asynchronous online technologies (Google Docs). DiGiovanni and McCarthy discuss tools for inter-professional competencies that are self-reported instruments. These instruments include the Inter-professional Educational Collaborative (IPEC) Competency Survey Instrument and the Readiness for Inter-professional Learning Scale. DiGiovanni and McCarthy suggest a) do something; b) look for funding opportunities to scale the projects; c) start small, and with individual leadership; and d) be flexible in how various professions participate (DiGiovanni & McCarthy, in press, p. 50). DiGiovanni and McCarthy conclude that “students are most attracted to interesting questions and projects that allow them to feel like the health care professionals they want to be and to tackle patient-centered issues” (p. 50). DiGiovanni and McCarthy note that “Classroom experiences should be project centered and grounded in major issues such as core competencies, with a greater focus on execution and less emphasis on the acquisition of content knowledge” (p. 50). DiGiovanni and McCarthy suggest that “administrators will tend to work within existing hierarchies” and stress that “IPE program work involve significant collaboration, trust, and some sacrifice in order for such novelties to become sustainable” (p. 50).

Learning: Not a Linear Process

According to Brooks and Brooks (1999), the pressures moving education away from the principles on which constructivist-based education rests include a) “developing high standards to which all students will be held”; b) “aligning curriculum to these standards”; c) “constructing assessments to measure whether all students are meeting the standards”; d) “rewarding schools whose students meet the standards”; and e) “punishing schools whose students do not meet the standards” (p. vii). Brooks and Brooks report that learning is a complex process that violates this linear logic because students internally construct understandings about the worlds in which they perform (Brooks & Brooks, 1999). Brooks and Brooks stress that “the quality of learning environment is not a function of where the students end up at testing time or how many students end up there” (p. vii). Brooks and Brooks note that the “dynamic nature of learning makes it difficult to capture on assessment instruments that limit the boundaries of knowledge and expression” (p. vii).

A Process of Making Personal Meaning

According to Brooks and Brooks (1999), in a participatory approach, the instructor assesses students’ work for understandings and then provides opportunities for students to edit their work, “posing contradictions, presenting new information, asking questions, encouraging research, and/or engaging students in inquiries designed to challenge current concepts” (p. ix). Brooks and Brooks provide five overarching principles evident in the participatory approach. These principles are a) “instructors seek and value their students’ points of view”; b) “classroom activities challenge students’ suppositions”; c) “instructors pose problems of emerging relevance”; d) “instructors build lessons around primary concepts and big ideas”; and e) “instructors assess student learning in the context of daily teaching” (pp. ix-x).

Choosing the Participatory Approach

According to Brooks and Brooks (1999), educational settings that encourage the active construction of meaning have several characteristics. These characteristics include a) “they free students from the dreariness of fact-driven curriculums and allow them to focus on large ideas”; b) “they place in students’ hands the exhilarating power to follow trails of interest, to make connections, to reformulate ideas, and to reach unique conclusions”; c) “they share with students the important message that the world is a complex place in which multiple perspectives exist and truth is often a matter of interpretation”; and d) “they acknowledge that learning, and the process of assessing learning, are, at best, elusive and messy endeavors that are not easily managed” (pp. 21-22).

Direct Instruction versus Participatory Instruction

Interest in instructional approaches to teaching and learning has increased dramatically in recent educational research. One consequence of such interest has been the move from traditional teaching approaches, such as teacher-centered instruction, the didactic approach, and direct instruction, to alternative outcome-based teaching approaches, such as student-centered instruction and participatory learning instruction (Rich, 2010). In order to differentiate among these instructional approaches, it is necessary to be aware of the philosophical assumptions that shape the processes of teaching and learning. The purpose of this section is to compare direct instruction and participatory learning instruction. This section provides definitions and the strengths and weaknesses of direct and participatory instruction, and then uses the elements of the worldviews to distinguish the two instructional types.

Direct instruction. Direct instruction is defined “as teaching through establishing a direction and rationale for learning by relating new concepts to previous learning”, “leading students through a specified sequence of instructions based on predetermined steps that introduce and reinforce a concept”, and “providing students with practice and feedback relative to how well they are doing” (Haas, 2002, p. 108).

According to Hattie (2009), direct instruction is discussed alongside unguided approaches, mastery learning, behavioral organizers, and reciprocal teaching (p. 243), and the role of the teacher in direct instruction is that of an activator. Hattie synthesizes research based on the main effects of what happens in classrooms. Hattie suggests a need for “a barometer of what works best”, and notes that such a barometer can also establish guidelines as to what is excellent (p. ix). Hattie identifies some challenges in direct instruction schools. These challenges include that “everything seems to work” (p. 1), that teachers teach differently, that teachers rarely talk about their teaching, that policy concerns with school structure (class size, school choice, and social promotion) are treated as the top-ranking influences on student learning, and that school-based decisions such as ability grouping, detracking or streaming, and social promotion are assumed to influence achievement (Hattie, 2009, p. 1). However, Adelman (1995) and Curtis and Stollar (1996) note that the efforts of school reform reflect a participatory model using a team-based approach.

Participatory intervention model. According to Nastasi and Varjas (1998), “a participatory intervention model is best characterized by a collaborative process in which the partners together create interventions to facilitate individual and cultural change” (p. 260). The participatory intervention model consists of three developmental components centered on partnership. These components are “the participatory program design (the stage during which program goals are identified and the intervention is planned)” (p. 261), “the participatory program implementation (the stage involves the modification of the culture-specific model to fit the needs and resources of specific contexts)” (p. 261), and “the participatory program evaluation (the stage examines the impact of the intervention)” (p. 262). Nastasi and Varjas note that “a participatory intervention model helps practicing students to engage in the best practices, integrate seemingly different roles, conduct practice as research, and contribute to the existing body of knowledge in their field”; and that “a participatory intervention bridges the gap between research and practice and between the worlds of academic and the practitioner” (Nastasi & Varjas, 1998, p. 273).

Participatory learning focuses on a participation ‘plus’ pedagogy model; knowledge and insight are drawn from diverse fields (Hedges & Cullen, 2012). According to Kenny and Wirth (2009), participatory learning practices are more descriptive than prescriptive in nature (p. 35). The researcher negotiates reality and collaborates with other participants (Creswell & Plano Clark, 2011). According to Creswell and Plano Clark (2011), there are four worldviews, including “post-positivism”, “constructivism”, “participatory worldviews”, and “pragmatism” (pp. 40-41). These worldviews cover five key elements. These elements include “ontology (nature of reality), epistemology (the knowledge), axiology (the role of values), methodology (the process), and rhetoric (the language)” (Creswell & Plano Clark, 2011, p. 42). Weaver and Cousins (2004) also provide four categories of worldviews, including epistemology, practice or pragmatism, emancipatory, and deliberative democracy. These worldviews have received much attention in the area of teaching and learning in education.

Critics note that the participatory approach to evaluation can bias findings because of the subjective nature of the decision making (Brisolara, 1998), political reality, and the democratic negotiation of the results (Creswell & Plano Clark, 2011). Proponents maintain that stakeholder engagement in participatory inquiry can reduce such biases (House, 2005), although House agrees that the elimination of subjectivity during a participatory approach is impossible. Also, Ryan, Greene, Lincoln, Mathison, and Mertens (1998) assert that the involvement of stakeholders in the participatory approach may encourage acceptance of the reality of the results and their utilization; however, Ryan et al. (1998) note that time constraints in the evaluation process may lead to insufficient training of the evaluators, causing stakeholders to doubt the quality of the results. Further, King (1995) notes lack of experience, apathy, dependency on experts, and lack of time as challenges to the participatory evaluation approach. Furthermore, Diagneault and Jacob (2009) note that participatory approaches have many concepts but few satisfactory operationalizations, and they provide some reasons for these challenges, namely a) multiple labels (collaboration, stakeholder involvement, interaction, and democracy), b) inadequate theories from an ontological (normative or prescriptive) perspective, and c) knowledge claims that come from case study reports of practices (Diagneault & Jacob, 2009, p. 331). Laudon (2010) notes that the participatory approach expands the normative objectives of democratic participation into the realm of evaluation; it increases evaluation use, builds stakeholder learning and capacity, and increases evaluation and research (p. 1). Wezemael, Verbeke, and Alessandrin (2012) identify strengths of the participatory approaches, such as dialogue, flexible method, multidisciplinary study, and public involvement; however, Wezemael et al. (2012) note several “weaknesses, such as quality participants and time-consuming nature, and threats, such as lack of interest among policymakers and competing methods” (p. 121). The next section is on competency-based learning.

Competency-Based Learning

In this section, concepts of competency are defined, reasons institutions opt for competency-based learning are provided, and studies on competency-based learning are discussed. According to Epstein and Hundert (2002), professional competence refers to “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and community being served” (p. 226). Epstein and Hundert note that competence is built “on a foundation of basic clinical skills, scientific knowledge, and moral development”; that “it includes a cognitive function, an integrative function, a relational function, and an affective or moral function”; and that it depends on “attentiveness, critical curiosity, self-awareness, and presence” (pp. 226-227). Epstein and Hundert suggest that a strong monitoring system, curricular change, and a more comprehensive assessment format should be employed.

Carraccio, Wolfsthal, Englander, Ferentz, and Martin (2002) found that descriptive studies have defined competencies and outlined processes for creating competency-based curricula, but that the tools available for assessing competencies were inadequate. Carraccio et al. (2002) suggest that educators should define and study the outcomes driving the shift to competency-based education; this insight will be helpful in knowing “whether competency-based training produces more competent physicians, and whether the paradigm shift of the new century is as significant as the Flexnerian revolution” (p. 366). Carraccio et al. (2002) found that in the 1970s and 1980s more attention was focused on the need for and development of medical professional competencies, and less on specific competency standards, how to attain them, or how to evaluate the competencies.

Competence has several definitions. Ende, Kelley, and Sox (1997) state that “the challenge of defining the competencies of general internist has brought the internal medicine community together, to speak with a single voice about internal medicine’s commitment to residents, patients, and society” (p. 457). Some definitions of competence follow.

Competence. Competence refers to “a person’s capacity to perform his or her job function” (Kelly-Thomas, 1998, p. 74).

Professional competence. According to Greiner and Knebel (2003), professional competence refers to “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individuals and community being served (Hundert, Hafferty, & Christakis, 1996)” (p. 24).

Competency. Competency refers to “a person’s actual performance in his or her specific job function or specified task” (Kelly-Thomas, 1998, p. 74).

Competency-based education. Competency-based education refers to “educational programs designed to ensure that students achieve prespecified levels of competence in a given field or training activity” (Greiner & Knebel, 2003, p. 24).

Core competency. A core competency refers to “the identified knowledge, ability, or expertise in a specific subject area or skill set that is shared across the health professions” (Greiner & Knebel, 2003, p. 24).

Competency areas. Competencies or competency areas refer to “those identified skills considered necessary to perform a specific job or service” (Kelly-Thomas, 1998, p. 74).

According to Barr (1998), a competent practitioner will a) “contribute to the development of the knowledge and practice of others”; b) “enable practitioners and agencies to work collaboratively to improve the effectiveness of services”; c) “develop, sustain, and evaluate collaborative approaches to achieving objectives”; d) “contribute to the joint planning, implementation, monitoring and review of care interventions for groups”; e) “coordinate an interdisciplinary team to meet individuals’ assessed needs”; f) “provide assessment services on individuals’ needs so that others can take action”; and g) “evaluate the outcome of another practitioner’s assessment and care planning process” (p. 183).

Reasons for competency-based learning. According to Barr (2009), competency-based learning is an educational strategy based on the core competencies. In competency-based learning, the unit of learning is a module. A module consists of one competency, and one competency is a small component of a larger learning goal. Students work on one module at a time and are assessed on each individual competency for mastery. A student moves on to the next competency after demonstrating mastery of the previous module. A student can skip learning modules that are already mastered, as shown through formative assessment. The students control their own study and learn at their own pace. Competency-based learning is a student-centered approach. It emphasizes mastery learning throughout the learning modules and requires ongoing assessments. The assessment strategies include self-assessment and multi-source feedback. The self-assessment technique involves using a common rating scale for assessing each indicator. The multi-source technique involves peer feedback, team members’ feedback, and clients’ feedback. The faculty member may serve as a facilitator. Competency-based learning has been recognized as an effective learning strategy; it helps to enhance the student’s knowledge, skills, behaviors, and abilities (Barr, 2009).

Competency-based inter-professional education is based on several needs (Barr, 1998). Barr (1998) states that competency-based inter-professional education should a) “reposition inter-professional education in the mainstream”, b) “enable students to relate professional and inter-professional studies coherently”, c) “enable students on inter-professional courses to claim credits as part of their professional education”, d) “gain the approval of validating bodies”, e) “attract support from employers”, f) “compensate for deficits in existing models of inter-professional education”, g) “equip professionals for multi-dimensional collaboration”, and h) “respond to renewed government calls for such collaboration” (pp. 182-183).

Reasons competency-based learning is important. Several studies (Barr, 2009; Hundert et al., 1996; IPEC, 2011; Prideaux, 2009; Smith, 2009) provide reasons that competency-based learning is important.

Barr (2009) reports that much “that you bring from medical education will be readily transferable to inter-professional education, but teaching a class drawn from a range of professions is challenging” (p. 191). Barr notes that the assumptions, perceptions, expectations, and experiences of each profession are different. Barr stresses that tensions may be disturbing but on reflection may be understood as opportunities, and that co-teaching may be encouraging. Barr lists the learning methods frequently used in IPE, including a) “exchange-based learning (e.g., debates and case studies)”; b) “action-based learning (e.g., problem-based learning, collaborative enquiry, and continuous quality improvement)”; c) “observation-based learning”; d) “simulation-based learning”; e) “practice-based”; f) “e-learning”; g) “blended learning”; and h) “received or direct learning” (Barr, 2009, pp. 189-190). Barr reports that the effectiveness of IPE should not be generalized because it takes varied delivery methods to achieve overlapping outcomes. These overlapping outcomes include a) “individual learning for collaborative practice”, b) “group or team-based learning for collaborative practice”, and c) “learning to effect change and service improvement” (p. 190).

Reasons for moving to competency-based learning. Several studies (Barr, 2009; Cahn, 2014; Epstein & Hundert, 2002; Felder & Brent, 2000; Leach, 2000; WHO, 1988) provide reasons why institutions and medical schools are moving to competency-based learning.

According to WHO (1988), IPE can a) “develop the ability to share knowledge and skills collaboratively”, b) “enable students to become competent in teamwork”, c) “de-compartmentalize curricula”, d) “integrate new skills and areas of knowledge”, e) “ease inter-professional communication”, f) “generate new roles”, g) “promote inter-professional research”, h) “improve understanding and cooperation between educational and research institutions”, i) “permit collective consideration of resource allocation according to need”, and j) “ensure consistency in curriculum” (pp. 16-17).

Cahn (2014) notes that IPE focuses on educating “students from different professions together”; that IPE “does not emerge naturally”; that IPE requires “students to take common courses”; and that instructors are to create “inter-professional learning activities” (p. 128). Cahn identifies two obstacles facing IPE in health professions education: (a) forces internal to an institution that prevent coordination and (b) external forces that seek to maintain professional boundaries (Cahn, 2014). Cahn notes that scheduling, skill levels, and administrative support may be challenges that health professions students face in learning together. Other problems are health care policy, accrediting standards, and curriculum. Common courses across IP health professions include health informatics, ethics, statistics, and research methods (Cahn, 2014). Students may lack team health care communication skills, and instructors may feel they are doing extra work.

Critiques of competency-based learning. Barr et al. (2008) provide some common issues with competency-based learning. These issues include how the professions respond through their licensing, accrediting, and certification processes. The committee identifies issues concerning the lack of a common language and core competencies, integration of core competencies, motivation and support, curricula and teaching approaches, and training of faculty as experts (Greiner & Knebel, 2003, p. 156).

Solutions to the critiques of competency-based learning. According to Greiner and Knebel (2003), for language and core competencies, an interdisciplinary working group has been tasked to develop a common language and to define common core competencies across professions (p. 158). For integration of core competencies into oversight processes, a working group has been formed for a) establishing “communication links among regulators”, b) defining “accreditation standards”, c) requesting additional “competencies to licensing exams”, and d) developing “model processes related to the competencies” (Greiner & Knebel, 2003, p. 160). For motivation and support of leaders, leadership and monitoring working groups have been formed for a) developing and making “use of fact sheets and case studies”, b) promoting “the overarching vision to the leadership of key organizations”, c) monitoring, evaluating, and communicating progress against this vision, d) making the case to sponsors, e) creating and supporting leadership development skills programs, f) creating and funding “fellowships for formal leadership courses”, and g) charging “IOM to create a national award related to implementation of the overarching vision” (Greiner & Knebel, 2003, pp. 162-163).

Greiner and Knebel (2003) note that, for evidence-based curricula and teaching approaches, working groups are formed to a) “promote the link between education and quality within leading health professional organizations”, b) “ask the national quality forum to examine the relationship between education and quality”, c) “strengthen the focus of fellowships on the overarching vision”, d) “investigate and identify information systems that support evidence-based education”, and e) “establish an Interdisciplinary Health Professional Education organization to identify and evaluate education models” (Greiner & Knebel, 2003, pp. 165-166). For faculty development, a working group has been formed for a) identifying “cross-cutting faculty competencies”, b) helping “organize an inventory of best practices and resources for faculty development”, c) creating a “program to recognize ‘educational scholars’ on national basis”, d) developing and disseminating “online self-instructional lessons or courses in faculty development related to the overarching vision”, and e) helping “develop models for reform of criteria for promotion and related compensation” (Greiner & Knebel, 2003, p. 168). To include 21st-century skills in teaching and learning, medical institutions are moving to competency-based learning. The next section is about collaborative learning.

Collaborative Learning

This section provides a definition of collaborative learning and discusses its strengths and barriers. Several studies (Barr, 1998; Cardellini, 2014; Felder & Brent, 2004) define collaborative learning. For example, Barr (1998) refers to collaborative competence as the ability to a) “describe one’s roles and responsibilities clearly to other professions and discharge them to the satisfaction of those others”; b) “recognize and observe the constraints of one’s own roles, responsibilities, and competence, yet perceive needs in a wider framework”; c) “recognize and respect the roles, responsibilities, competence and constraints of other professions in relation to one’s own, knowing when, where and how to involve those others through agreed channels”; d) “work with other professions to review services, effect change, improve standards, solve problems and resolve conflicts in the provision of care and treatment”; e) “work with other professions to assess, plan, provide, and review care for individual patients, and support carers”; f) “tolerate differences, misunderstandings, ambiguities, shortcomings, and unilateral change in other professions”; g) “enter into interdependent relationships, teaching and sustaining other professions and learning from and being sustained by those other professions”; and h) “facilitate inter-professional case conferences, meetings, team working, and networking” (Barr, 1998, p. 185).

According to Felder and Brent (2004), collaborative learning refers to “two or more students working together on an assignment or project” (p. 4). Collaborative learning in the classroom promotes students’ learning with understanding (Devetak & Glazar, 2014). Devetak and Glazar (2014) report on various influences on collaborative learning, including students’ culture, race, ethnicity, and social background. Collaborative learning uses small-group activities in classroom settings. In collaborative learning, the roles assigned to students are fewer; the group tasks are open-ended and complex; and the faculty member is not the center of authority.

Strengths for collaborative learning. There are several strengths for collaborative learning. For example, Felder and Brent (2004) note that, compared to “students working individually, students working on well-functioning teams in a course learn more, learn at a deeper level, are less likely to drop out, and develop more positive attitudes toward the course subject and greater confidence in themselves” (p. 4). Felder and Brent report that the most familiar problems with collaborative learning involve dominant students, students who do little, students who are deliberately excluded, and interpersonal conflicts. Felder and Brent stress that teams should quickly resolve these problems, but if they cannot, then the members may well be better off working individually. Felder and Brent caution that such situations often escalate quickly and should be addressed when they occur.

Barriers for collaborative learning. There are several barriers to collaborative learning (IPEC, 2011). For example, according to IPEC (2011), one of the barriers is institutional-level challenges. The others include “lack of institutional collaborators, practical issues, faculty development issues, assessment issues”, and “lack of regulatory expectations” (pp. 34-35). Hall and Weaver (2001) note differences in the roles of team members as a barrier.

Teamwork. Working in teams requires students to engage in three processes, namely, teamwork (working collaboratively), communication (communicating the work of the group to outsiders), and reasoning (reasoning to solve problems) (Schanks et al., 1993). Obstacles in collaborative team learning are “selecting compatible team members and defining team rules, choosing group activities that benefit from different viewpoints and experiences of team members, and using discussion strategies that support deeper learning among team members” (Trilling & Fadel, 2009, p. 114).

Teamwork implications. For education, curriculum planners should include content and opportunities that foster effective IP teamwork in prelicensure and continuing education so that health professionals study together, “with, from, and about each other” and their respective roles (Sargeant, Loney, & Murphy, 2008). For accrediting bodies, educational interventions should be aligned to promote teamwork and to integrate content from other disciplines formally (Sargeant et al., 2008).

McCallin (2005) notes that collaborative health care practice may be universal, parallel, consultative, coordinated, multidisciplinary, interdisciplinary, and integrative. McCallin notes that the context may alter how the concept is understood; what works well in one service may not be easily practiced elsewhere. McCallin maintains that understanding others’ roles, communicating effectively, and working in teams effectively are among the challenges of collaborative teamwork. McCallin asserts that leadership, resources, and the organizational environment also affect new approaches. McCallin reports that health professionals working well together improve client outcomes and job satisfaction, and that health professionals communicating and collaborating effectively may benefit the patient and provider. McCallin argues that developing IP practice requires a commitment to sharing and communicating ideas, and that dialogue fosters IP learning through negotiating meaning and rediscovering deeper meanings for collaboration (McCallin, 2005). The next section is on cooperative learning.

Cooperative Learning

This section starts with a definition and then discusses strategies, forms, elements, strengths, barriers, and implications of cooperative learning. Several studies (Bodner et al., 2014; Cardellini, 2014; Chiu, 2004; Felder & Brent, 2003, 2004, 2007; Oakley et al., 2004; Slavin, 1990) define and describe cooperative learning. For example, Felder and Brent (2007) refer to cooperative learning as “students working in teams on an assignment or project under conditions in which certain criteria are satisfied, including that the team members be held individually accountable for the complete content of the assignment or project” (p. 1). Chiu (2004) conducts a study that tested a model of faculty interventions (TIs) in cooperative learning. Chiu examines how effective and efficient adaptive faculty interventions were for problem solving. Chiu’s results show that faculty initiated most TIs when students’ performance decreased, and that TIs can increase problem solving if faculty evaluate students’ work (Chiu, 2004).

Felder and Brent (2003) provide the principal methods of assuring individual accountability in cooperative learning. These methods include a) giving “individual examinations covering every aspect of the assignment or project”, b) designating the “team member that presents the oral project report”, c) collecting “peer ratings of team”, d) constructing “weighting factors”, and e) applying weights “to team assignment grades to determine individual assignment grades” (Felder & Brent, 2003, p. 24); a brief illustrative sketch of this weighting computation follows below. Felder and Brent (2004) suggest conditions for maximizing the benefits of teamwork through cooperative learning. These conditions include “positive interdependence, individual accountability, face-to-face interaction, facilitation of interpersonal skill development, and periodic self-assessment of team functioning” (Felder & Brent, 2004, p. 5). Felder and Brent note that cooperative learning is a subset of collaborative learning. Implementing cooperative learning effectively requires knowing how to form teams and resolve teamwork problems, when to allow teams to dissolve and how to form new ones, how to structure assignments for positive interdependence and individual accountability, and how to deal with student resistance and hostility.
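To make the weighting-factor idea concrete, the following is a minimal sketch, assuming a hypothetical grading scheme rather than Felder and Brent’s (2003) actual procedure, of how mean peer ratings might be turned into weighting factors and applied to a shared team grade. The function name, the 0-10 rating scale, and the capping rule are illustrative assumptions only.

def individual_grades(team_grade, peer_ratings, cap=100.0):
    # peer_ratings: hypothetical mapping from each member to the mean rating
    # (e.g., on a 0-10 scale) received from teammates.
    team_average = sum(peer_ratings.values()) / len(peer_ratings)
    grades = {}
    for member, rating in peer_ratings.items():
        weight = rating / team_average                   # weighting factor for this member
        grades[member] = min(cap, team_grade * weight)   # individual assignment grade
    return grades

# Example: a team grade of 85 with mean peer ratings of 9, 8, and 7
# yields roughly 95.6, 85.0, and 74.4 for the three members.
print(individual_grades(85.0, {"A": 9.0, "B": 8.0, "C": 7.0}))

Under this assumed scheme, a member rated exactly at the team average receives the team grade unchanged, which reflects the spirit of using peer ratings to adjust, rather than replace, the team assignment grade.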

Oakley et al. (2004) recommend three- and four-person teams for effective teamwork. Oakley et al. suggest that the instructor should “remind the class about the team policies” (p. 26) and the actions to take when team members are non-cooperative (hitchhikers). The instructor should create opportunities for the groups to develop the attributes associated with high-performance teams (Oakley et al., 2004). Oakley et al. note that cooperative learning focuses on reducing the common student attitude that the instructor is the only source of truth and wisdom. Faculty should instruct students to consult three different sources of information before coming for help. Cooperative learning has shown strong positive effects on almost every learning outcome (Oakley et al., 2004).

Bodner et al. (2014) describe cooperative learning as a situation in which an individual can attain his or her goals only when the other group members also achieve theirs. Bodner et al. note that cooperative learning may “improve student achievement, enhance students’ self-esteem, increase the use of higher-order cognitive skills, improve both cross-sex and cross-ethnic relationships, and reduce science and math anxiety” (p. 142). Bodner et al. suggest that cooperative learning must not be viewed as a threat to the faculty, and that introducing cooperative learning into the classroom can be a movement from whole-class to small-group instruction, or a movement toward a student-centered classroom. Bodner et al. provide strategies for introducing cooperative learning into the classroom. These strategies include discussion questions that the students can answer. Bodner et al. (2014) emphasize that the significant element in cooperative learning is to create an interactive environment.

Cardellini (2014) analyzes problem solving using cooperative learning in chemistry classrooms at the university level. Cardellini presents cooperative learning as an instructional method. This instructional method employs five criteria, including high interdependence, individual accountability, class interaction, development and appropriate use of interpersonal skills, and self-assessment of group functioning (Cardellini, 2014). According to Cardellini (2014), cooperative learning assists students in developing teamwork, managing conflict, and acquiring the leadership skills that may be necessary for successful professional and personal lives.

According to Slavin (1990), in the 1970s and 1980s cooperative learning was examined as an instructional methodology in undergraduate and graduate education courses. In cooperative learning, most instructors use “discussion groups, project groups, lab groups, or peer tutoring” (Slavin, 1990, p. xi). Evidence shows that cooperative learning strategies are effective, produce a wide range of outcomes, enhance achievement, and improve intergroup relations (Slavin, 1990). Simulation is an example of a cooperative learning method. Using cooperative methods to teach about cooperative learning is logical (Slavin, 1990). Slavin notes that “we remember far more about things we personally experience than about things we only hear or read about” (p. xii).

Cooperative learning can set up competition between students; such competition can be healthy and effective when it is well structured. Low achievers may lack the prerequisites to learn new material and may receive negative feedback on their academic efforts. In a typical example, students work together cooperatively in four-member teams to master material on map reading and then take individual quizzes on the map reading. The students’ quiz scores are added up, and the teams with high average scores receive special recognition. According to Slavin (1990), the rationale of cooperative learning is that “students will succeed as a team, they will encourage their teammates to excel, and they will help them to do so” (p. 2).

Cooperative learning methods. Slavin (1990) notes that “all cooperative learning methods share the idea that students work together to learn and are responsible for their teammates’ learning as well as their own” (p. 3). Slavin provides four principal student team learning methods, including “student team-achievement divisions (STAD) (most appropriate for teaching well-defined objectives with single right answers, and every student must know the material)” (p. 4); “teams-games-tournaments (TGT) (teammates help one another, study the worksheets, and explain problems to one another)” (p. 4); “team assisted individualization (TAI) (team members work on different units, and teammates check each other’s work against answer sheets and help one another with any problems)” (pp. 4-5); and “cooperative integrated reading and composition (CIRC) (students follow a sequence of teacher instruction, work in teams to understand the main idea and master other comprehension skills)” (p. 5).

Jigsaw is another cooperative learning method (Slavin, 1990). Slavin provides some activities for students in the jigsaw cooperative learning method. Slavin suggests that a) students are assigned to a separate section of academic work, “each member reads his or her section, members of different teams having studied the same sections meet in expert groups to discuss their sections, students return to their teams and take turns teaching their teammates about their sections”; or b) all students read a common book chapter, “each receives a topic on which to become an expert, students of the same topics meet in expert groups to discuss them, then return to their teams to teach what they have learned to their teammates” (p. 10). Related methods include group investigation (students in “groups choose topics from a unit that the entire class studied, they break these topics into individual tasks, they carry out the activities necessary to prepare group reports, and each group presents its findings to the entire class” (p. 10)) and “learning together (students work together in four-to-five member groups to achieve a group goal, students must show that they have mastered the material, and discuss how well their groups are working to achieve their goals” (p. 111)).

According to Slavin (1990), cooperative learning methods differ in many ways along six principal characteristics, including group goals; “individual accountability (team’s success depends on the individual learning of all members”; accountability means team members teach one another and make sure that every member on the team is ready for a quiz or task without teammate help); “equal opportunities for success (students contribute to their teams by improving on their own past performance)”; “team competition; task specialization; and adaptation to individual needs” (p. 12). Slavin notes that cooperative learning methods can be an instructionally “effective means of increasing students’ achievement when they use group goals and individual accountability” (p. 32), and that cooperative learning can be an “effective form of classroom organization for accelerating student achievement” (p. 33).

Slavin (1990) concludes that cooperative learning strategies improve students’ achievement and that they positively influence achievement outcomes. Slavin asserts that “for any desired outcome of schooling, administer a cooperative learning treatment, and about two-thirds of the time there will be a significant difference between the experimental and the control groups in favor of the experimental groups” (p. 53). Slavin (1990) remarks that “A great deal is changed when an instructor adopts cooperative learning: the classroom incentive and task structures, feedback systems, and authority systems and the instructor’s role all change substantially” (p. 53). According to Slavin (1990), the jigsaw method of cooperative learning allows students to learn sections through listening carefully to their teammates. Slavin asserts that “students are motivated to support and show interest in one another’s work” (p. 10).

Forms of cooperative learning. There are several forms of cooperative learning. According to Slavin (1990), group discussion and group projects are the oldest and most universally used forms of cooperative learning. Slavin notes that “most science instructors use cooperative lab groups; and many social studies and English instructors use discussion or project groups” (p. 112).

Learning together. According to Slavin (1990), one of the cooperative learning methods is learning together. Learning together emphasizes four elements. These elements include a) “face-to-face interaction”; b) “positive interdependence”; c) “individual accountability”; and d) “interpersonal and small-group skills” (Slavin, 1990, p. 111).

Discussion groups. There are several tasks in setting up a discussion group. Slavin (1990) provides the main tasks in setting up a discussion group. These tasks include a) “making sure each group member participates”, b) “selecting a leader for the discussion group”, c) “having each member write an opinion or an idea before the group starts discussing”, d) “making members experts on some part of the topic”, e) “having members do research on their area of expertise”, and f) “setting the aim/objective for the discussion” (pp. 112-113). Slavin suggests ways to break the reports into parts that different members write, including Group Investigation, Co-op, and Jigsaw.

Group projects. There are several principles for group projects. Slavin (1990) provides the basic principles for group projects. These basic principles include a) “get everyone to participate”, b) “do not allow one or two members in the group to take all responsibility”, c) “select a leader”, d) “give each member a specific part of the task or report to write or present to the class”, and e) “allow the group to divide the group project into parts” (Slavin, 1990, p. 113). Slavin suggests the use of Group Investigation and Co-op to complement larger group projects. According to Fitz-Gibbon (1996), “Projects are motivating, allowing deep reflection and painstaking effort, and provide experience of planning and meeting deadlines - not to mention gaining help from others; projects allow integration of knowledge, understanding and skills” (p. 88).

Informal cooperative methods. Slavin (1990) asserts that many instructors weave cooperative activities into their direct lessons or use them when presenting lessons in STAD, TGT, or other cooperative techniques. These informal cooperative activities include a) “spontaneous group discussion (ask students in groups to discuss what something means, why something works, or how a problem might best be solved to complement a direct lesson)”, b) “numbered heads together (a variant of group discussion - have only one member represent the group but not informing the group in advance whom its representative will be)”, c) “team product (have student teams design a better government, list possible solutions to a social problem, or analyze a poem, assign team members specific roles or individual areas of responsibility)”, d) “cooperative review (student groups make up review questions, take turns asking the other groups the questions, get a point for a correct answer, another group gets a point if it can add any important information to the answer)”, and e) “think-pair-share (students sit in pairs within their teams, the teacher poses questions to the class, students are instructed to think of the answer on their own, then to pair with their partners to reach consensus on an answer”, and then “to share their agreed upon answers with the rest of the class)” (Slavin, 1990, pp. 113-114).

Elements in cooperative classroom management. Slavin (1990) describes classroom management methods for cooperative learning. These methods include group-based positive reward (theory) and management techniques. For group-based positive reward, Slavin (1990) suggests that instructors should pay attention to undesired behaviors; “give special recognition (that is specific, public, and recorded) to the team”; “define clear expectations, necessary behaviors (quickly coming to full, quiet attention whenever the instructor asks)”, and “appreciated behaviors (extra peer helping, cooperation with teammates, and attention to the needs, opinions, and desires of others)” (p. 116). For management techniques, Slavin (1990) discusses a) the zero-noise signal (a “signal to students to stop talking, give their full attention to the instructor, and keep their hands and bodies still)”; b) “group praise (shapes the class, establishes the norms for the class, students learn the valued behaviors and receive special recognition for exhibiting them)”; c) “special-recognition bulletin (a chart or poster to record special-recognition points on a positive comment, students are motivated, work hard, encourage each other toward desired behavior)”; d) “special-recognition ceremony (each week the instructor and students hold a brief and very important recognition ceremony to recognize outstanding teams and individuals)”; and e) “class or team fun time (choose a fun activity, earn a certain number of points, provide a visible measure of how the class is progressing toward the class reward)” (p. 118). Slavin asserts that the effectiveness of the zero-noise signal a) depends on the effectiveness of the positive group reward, and b) depends a great deal on the way special recognition is given (Slavin, 1990).

Strengths for cooperative learning. Several studies (Felder & Brent, 2000, 2003, 2007) examine the benefits of cooperative learning. For example, Felder and Brent (2000) provide some of these benefits, including improved a) “student-instructor and student-student interactions”; b) “information retention, grades”; c) “higher-level thinking skills”; d) “attitudes toward subject, motivation to learn it”; e) “teamwork, interpersonal skills”; f) “communication skills”; g) “understanding of personal environment”; h) “self-esteem, lower level of anxiety (due to less emphasis on competition)”; i) “race, gender relations (if cooperative learning is implemented carefully)”; and j) “far fewer (and better) papers to grade” (p. 23).

Felder and Brent (2007) note that instructors who use cooperative learning well in their classes help prepare their students for their professional careers. Felder and Brent find “that a combination of cooperation and competition facilitates motivation, enjoyment, and performance of participants; that the most significant benefits are to the students whose outcome above content-driven and application-based objectives” (Felder & Brent, 2007, p. 82). The greatest benefit is obtained if the implementation follows the principles: the novices will benefit from their more experienced mates, and the experienced students will benefit from teaching others (Felder & Brent, 2003).

Felder and Brent (2000) report effect sizes from a meta-analysis of studies of small-group learning in college science, technology, engineering, and mathematics courses. Here are the effect sizes they reported: a) “positive effect (d = 0.51) on students’ achievement”, b) “positive effect (d = 0.46) on students’ persistence”, and c) “positive effect (d = 0.55) on students’ attitudes (far exceeds the average effect on affective outcome measures of d = 0.28 for classroom-based educational interventions)” (p. 23).
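As a point of reference for interpreting these values (this note is explanatory and not part of Felder and Brent’s report), the effect size d cited above is the standardized mean difference between two groups,

$$ d = \frac{\bar{X}_{1} - \bar{X}_{2}}{s_{pooled}} $$

so a value of d = 0.51, for instance, indicates that the small-group mean exceeded the comparison-group mean by about half of a pooled standard deviation. By Cohen’s commonly cited benchmarks, values near 0.2, 0.5, and 0.8 are conventionally described as small, medium, and large effects, respectively.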

Barriers for cooperative learning. There are several barriers to cooperative learning. Felder and Brent (2007) report that, in cooperative learning, instructors face confrontations from their students, including brilliant students, teammates who are slow, and weak students. Instructors must also address patient-centered care (Mead & Bower, 2000) and educate their students on quality improvement and informatics as disciplines (Greiner & Knebel, 2003).

Implications for instructors. Cooperative learning has several implications for instructors. Several studies (Chiu, 2004; Felder & Brent, 2007) on cooperative learning provide implications for instructors. For example, Chiu (2004) identifies that students’ actions and the instructor’s actions enhance students’ future problem-solving skills. Chiu notes that students sometimes solve problems diligently and productively, that the instructor should recognize students showing little progress, and that the instructor should evaluate students’ work, provide minimal content support, and use guidelines to improve students’ performance (Chiu, 2004). Chiu concludes that students did not request help when they needed it, and that instructors should monitor their students’ work and intervene as necessary.

Felder and Brent (2007) suggest that the instructor should group students into teams rather than allowing students to select their own teammates; the instructor should promote positive interdependence; the instructor should create opportunities for individual accountability; and the instructor should help students develop teamwork skills (pp. 7-9).

The next section is on classroom assessment.

Classroom Assessment

In this section, reflective journals and peer assessment are discussed, and strengths, challenges, and the roles of the instructor and students are provided. According to Murphy (1996), integrating learning and assessment helps instructors ensure that all students’ achievements and needs are identified and built into the instructors’ plans. Murphy notes that “assessment practice corresponds to teaching practice; and that evidence from assessment helps to illuminate some of the characteristic ways students make sense of situations” (p. 174). Murphy provides the features of tasks identified to affect students’ performance significantly. These task features are “the content, the context, task cues, and mode of presentation, the mode of operation, the mode of response, and the openness of the task” (p. 179). Murphy believes that “there is a need to understand how to do assessment that is consistent with current constructivist and sociocultural perspectives of learning”; that “individual rates of progress differ; and that progress is not linear”; and that “students’ newly acquired knowledge may disrupt existing knowledge” (p. 192).

Assessment as objective tests, as Shepard (2000) pointed out, supports a model of education based on a social efficiency curriculum and behaviorist theory, but does not support the principles of constructivism that currently guide the development of student-centered learning activities. Shepard notes that “assessment can be used as a part of instruction to support and enhance learning, and also can be used to give grades or to satisfy the accountability demands of an external authority” (p. 4).

Peer assessment. Several studies (Asgari & Dall’Alba, 2011; Hughes, 2011; Lutze-Mann, 2014; Strijbos et al., 2010; Topping, 1998) define and critique peer assessment. According to Lutze-Mann (2014), peer assessment refers to “the assessment of students’ work by other students of equal status” (p. 1). In other words, students assess each other’s work. However, there are two forms of assessment (Hughes, 2011), namely, criterion assessment and standards assessment. For students to assess each other’s work correctly, they need to understand the terms standards and criteria. Hughes notes that the terms are confusing and provides a brief description of them. Hughes (2010) notes, “Criteria are descriptive, whereas standards are judgmental” (p. 3). Topping (1998) refers to peer assessment as “an arrangement in which individuals consider the amount, level, value, worth, quality, or success of the products or outcomes of learning of peers of similar status” (p. 250). Topping notes that there are various definitions, that they are confusing, and that they need careful study. Topping provides the variables that bring about the confusion. Some of these variables include the curriculum area, objectives, focus, products, staff assessment, official weights, directions, privacy, and contact. Others include “year, ability, constellation assessors, constellation assessed, place, time, requirement, and reward” (p. 252).

Topping (1998) provides general organizational factors leading to successful implementation of peer assessment, including a) “clarifying expectations, objectives, and acceptability”; b) “matching participants and arranging contact”; c) “developing and clarifying assessment criteria”; d) “providing quality training”; e) “specifying activities”; f) “monitoring the process and coaching”; g) “moderating reliability and validity”; and h) “evaluating and promoting feedback” (pp. 265-267). Topping (1998) notes, “peer assessment of professional skills provides adequate reliability, but shows overall outcome; and the data are limited” (p. 264). Topping (1998) indicates that “observational schedules have gains in peer assessment of professional skills whereas follow-up through self-monitoring has not” (p. 264).

Topping (1998) maintains that peer assessment of writing, marking, grading, and testing was positively and formatively related to students’ achievement and attitudes. Topping states that there is limited evidence for peer assessment of skills in presentation, group work or project work, and professional work. Topping suggests that “future research in peer assessment should consider a more critical review, a best-evidence synthesis, a meta-analysis, student characteristics, and research design” (p. 268). Topping further suggests that future instructors who will embark on peer assessment should carefully consider the advantages and disadvantages in their personal and local context. Topping recommends that instructors may employ peer assessment through grades and of writing when they want results of both quantity and quality. Topping concludes that “peer assessment well organized, delivered, and monitored with care can improve cognitive, social, affective, transferable skill, and systemic domains” (p. 269).

Strijbos et al. (2010) conduct a study to investigate the ways feedback content and the competence level of the sender affect feedback perceptions and performance. Strijbos et al. maintain that feedback from a low-competent peer is effective, whereas the results from the literature show the opposite. Strijbos et al. suggest that instructors should involve low-competent students in peer assessment exercises with guidelines. Asgari and Dall’Alba (2011) conducted a study to improve group functioning in solving realistic problems. Asgari and Dall’Alba used a combination of a) team skill training prior to group work, b) peer and self-assessment to evaluate contribution to group work, and c) instructor-designed versus student-designed problems (Asgari & Dall’Alba, 2011). Asgari and Dall’Alba found that some students were uncomfortable with peer assessment and also with the potential bias involved when their peers assessed them.

Strengths of using peer assessment. Peer assessment has several benefits (Asgari & Dall’Alba, 2011; Cvetkovic, 2013; Doerry & Palmer, 2011; Lutze-Mann, 2014). For example, Lutze-Mann (2014) notes that peer assessment can engage students in the learning process and develop their reflective, evaluative, and generic skills. Lutze-Mann provides some of these skills, including working cooperatively, thinking critically, and giving constructive feedback. Others include managing one’s own learning autonomously, developing interpersonal skills, and developing awareness of group dynamics. Furthermore, Lutze-Mann notes that peer assessment is fair, valid, and reliable when structured marking schemes are used; that it is a positive experience for students; that it promotes a sense of fairness; and that it makes students become active agents in assessment procedures (pp. 1-2).

According to Asgari and Dall’Alba (2011), team skill training, peer assessments, and moderation marks improve students’ contribution to group work, and students prefer to design their own problems. Cvetkovic (2013) conducts an evaluation study to determine the effectiveness and efficiency of self- and peer-assessment on student team-based projects in a biomedical engineering curriculum. Cvetkovic finds that the peer-blind self- and peer-assessment methods lead to high discrepancy between self and team ratings, whereas the face-to-face method lacks this discrepancy problem and is more accurate and effective. Cvetkovic’s results show that the self and peer assessment improved cooperative learning, and that the peer-blind and face-to-face methods had both strengths and weaknesses.

Doerry and Palmer (2011) develop a system that has high efficacy and fidelity in mapping internal team dynamics and individual performance using integrated structured task reports and anonymous peer evaluations. Doerry and Palmer provide the following best practices. These practices include a) “focusing on quantitative measures”; b) “anonymity of peer evaluations”; c) “association with specific deliverables”; d) “peer evaluations should be used as direct adjustment factors in calculating grades”; e) “integration of multiple measures”; and f) “low overhead for students and instructor” (Doerry & Palmer, 2011, p. 15). Doerry and Palmer conclude that peer evaluation and task tracking technology showed a significant reduction in the effort of managing the team evaluation process for students and instructor (Doerry & Palmer, 2011).

Challenges of peer assessment. Using the peer assessment technique poses several challenges (Lutze-Mann, 2014). For example, Lutze-Mann (2014) notes that peer assessment may not contribute to students’ summative grades; in that case, students may not value the importance of the learning and may feel reluctant to participate in it. Lutze-Mann asserts that peer assessment raises social tension and issues of loyalty. Students may criticize their peers for giving them low marks. Lutze-Mann maintains that students doubt the credibility of peer assessment results, and that peers may award higher marks than the instructor.

The role of the instructor. According to Topping (1998), the instructor should assess the group work and allow the group to mark its members. For each group, each member should assign marks to every member on the basis of the contributions they made to the group work. The instructor should develop peer assessment criteria with the students, provide useful feedback, and create an environment with built-in back-feedback. This built-in back-feedback gives students the opportunity to respond to the assessment. The instructor should let students know the reasons for involving them in the assessment and explain the benefits to them. The instructor should make the peer assessment process anonymous, form the teams, and keep the teams on track.

The role of the student. According to Lutze-Mann (2014), students are active learners and assessors. They are autonomous in their learning. They are critical reflectors. They are problem solvers. They are responsible for their own decisions. Students should contribute to the team’s work, interact with teammates, keep the team on track, and expect quality work. Students should have relevant knowledge, skills, and abilities. The next section is on group facilitation.

Group Facilitation Theories and Models for Practice

This section begins with the kinds of facilitator authority. It discusses authority and authoritarianism, the confusion of the three kinds of authority, and the need for authority. It also provides and discusses various types of learning.

Tutelary Authority as Initiation

According to Heron (1993), tutelary authority replaces the old idea of cognitive authority and is a much more sophisticated notion. Heron defines tutelary authority as “mastery of some body of knowledge and skill and of appropriate methods for passing it on”, “effective communication to students through the written and spoken word and other presentations”, and “competent care for students and guardianship of their needs and interests” (p. 17). In relation to the subject, Heron (1993) notes that facilitators are “intellectually competent in it, and bring emotional, interpersonal, political, spiritual and other competences to bear upon their attitude to and presentation of it” (p. 17). “They have a holistic grasp of the subject and can reveal it in a way that shows its interconnections with all aspects of the person and with other interdependent subjects” (p. 17). In relation to procedures, Heron (1993) notes that facilitators have a much more expanded repertoire than many direct instructors in higher education, who have a limited range of methods and have received little or no training in the area. Heron stresses that being knowledgeable about diverse learning methods and skilled in their facilitation is essential for honoring autonomy and holism in students (p. 17). Heron provides various learning strategies and discusses the issues involved in them.

Open learning. According to Heron (1993), there is “a great emphasis on the provision of open learning materials” (p. 17). One provision Heron notes is “systems and packages of information and exercises” (p. 17). Another is “words and graphics that are presented in a way that takes account of the self-pacing” (p. 17). The last is the self-monitoring student.

Active learning. According to Heron (1993), active learning receives much attention. The emphasis is on the “design and facilitation of holistic, participative methods - games, simulations, role plays, and a whole range of structured activities” that involve students “in self-directing action and reflection, in affective and interpersonal transactions, in perceptual and imaginal processes” (p. 17). The “facilitator uses the experiential learning cycle in various formats: this grounds learning in personal experience, and releases learning as reflection on that experience” (p. 17).

Real learning. According to Heron (1993), real learning involves “projects, field-work, placements and inquiry outside the classroom, case studies, problem-oriented learning” (p. 17). Heron stresses that all these are important approaches to the learning process. Heron asserts that real learning is “dynamically related to what is going on in the real world” (p. 18).

Peer learning. According to Heron (1993), peer learning provides “the autonomy for the student who needs the supportive context of other autonomous students” (p. 18). Heron notes several benefits of the peer learning group. These include “student co-operation in teaching and learning, experience and reflection, practice and feedback, problem-solving and decision-making, interpersonal process, self and peer assessment” (p. 18).

Multi-stranded curriculum. According to Heron (1993), the curriculum is “holistic and multi-stranded”. A multi-stranded curriculum “means several different and related things”. These include “the main subject on the curriculum” and “complementary minor subjects”. The facilitator teaches “each subject in a way that shows its interconnections with the whole person and with other interdependent subjects”. Within-subject active learning methods address the whole person, may empower learning and evoke deep inner resources, bring out the subject’s impact on different aspects of the student, and show its interdependence with other subjects. Active learning “activities in the classroom are not to do with the formal subject but to do with the self and others in ways that involve various aspects of the whole person” (p. 18).

Contract learning. According to Heron (1993), in contract learning the students are given the opportunity to plan their own program of learning. The students participate in the assessment of learning using “collaborative contracts and collaborative assessment with the facilitator. This item overlaps with the facilitator’s exercise of political authority” (p. 18).

Resource consultancy. According to Heron (1993), the amount of direct teaching in the new approach is significantly decreased compared with the traditional approach. In resource consultancy, the facilitator becomes much more a resource and consultant. The facilitator is present when needed to support student self-direction. The facilitator “clarifies, guides, discusses, and supports active student” (p. 18).

Guardianship. According to Heron (1993), in guardianship the facilitator cares for and watches over students’ needs and interests. The facilitator alerts the students to “unexplored possibilities, to new issues of excitement, interest, and concern”. The facilitator reminds students “of issues discussed, of commitments made and contracts agreed” (p. 18).

According to Heron (1993), the main problem lies with holistic teaching and contract learning. Heron asserts that difficulties arise when holistic teaching and contract learning are applied to students who come from a very non-autonomous and non-holistic educational background, especially students who move from secondary to tertiary education. Heron believes that when students start early on in a course using learning contracts to plan their own learning to a significant degree, they are likely to do so in terms of the traditional learning methods they have brought with them. Heron therefore suggests that when facilitators are going to initiate students into holistic methods, they have to plan a lot of the learning until students have internalized these methods and can manage them autonomously (p. 18).

Heron (1993) maintains that the tension between autonomy and holism in learning is a major issue in the educational revolution. Heron notes that many instructors are moving forward with learning contracts, which enhance student autonomy, without considering whether the resultant learning process is holistic (p. 19).

Political Authority as Initiation

A political authority has three decision modes. According to Heron (1993), political authority refers to the facilitator’s “exercise of educational decision-making with respect to the content, methods and timing of learning and teaching” (p. 19). Heron reports that a crucial shift is taking place in the use of this kind of authority in education. Heron notes that the concept of political authority needs to undergo a complete redefinition. Heron believes that the full implication of this has yet to be fully articulated and grasped. Heron asserts that “the shift is from deciding in terms of just one decision-mode to ‘deciding which decision-mode to use’” (p. 20). Thus, Heron notes a vast increase in facilitator flexibility and enabling power. Heron explains that a decision-mode is one of the three basic ways of making educational decisions relative to the students. Heron suggests that the facilitator can make decisions for students, “can make decisions with students, or facilitator can give students space to make decisions on their own” (p. 20). Heron names these three decision-modes as “direction, negotiation and delegation”, with the corresponding relations of hierarchy, co-operation and autonomy (Heron, 1993, pp. 19-20).

According to Heron (1993), the decisions that should be taken focus on the basic elements of the learning process. These elements include learning objectives, the topics to be learnt, the pacing and progression of learning, the teaching and learning methods, the human and physical resources to be used, and the criteria and methods of assessment (p. 20). Heron suggests that when topics are combined with pacing and progression in one item of the course program, or timetable, the five main areas for educational decision-making are obtained (p. 20).

The Three Decision-Modes and Five Elements of Learning Process

There are three decision modes. According to Heron (1993), these decision modes are direction, negotiation, and delegation. The five elements of learning are objectives, the program, methods, resources and assessment. Heron provides and describes these decision modes as follows:

Direction. According to Heron (1993), direction means that the facilitator exercises educational power unilaterally. The facilitator decides everything in the five areas for the students. The facilitator decides, without in any way consulting students, what they will learn, when they will learn it, how they will learn it and with what resources, and by unilateral assessment the facilitator decides whether they have learnt it. Students’ performance with respect to objectives, the program, methods, resources and assessment is entirely subordinate to the facilitator’s commands. Their self-direction can only be exercised in a minimal way within the complete framework of learning which the facilitator prescribes (p. 20).

Negotiation. According to Heron (1993), negotiation means that the facilitator exercises educational power bilaterally. The facilitator decides everything with the students. The facilitator’s “decision mode is co-operative” (p. 20). The facilitator takes into account student self-direction and will “consult them about everything and seek to reach agreement in setting up mutually acceptable contracts about objectives, the program, methods, resources and assessment” (p. 21). Heron notes that “assessment will be collaborative, involving a negotiation between students’ self-assessments and facilitator’s assessments of their work” (p. 21).

Delegation. According to Heron (1993), delegation means that the facilitator gives space for the unilateral exercise of educational power by the students themselves. Facilitators have declared themselves redundant, and students are self-determining with respect to their objectives, program, methods, resources and assessment. Heron notes that everything, including assessment, is self and peer determined in autonomous student groups (p. 21).

According to Heron (1993), each of these decision-modes, used on its own, is unacceptable for running any course in higher education. Heron asserts that empowering students in a course requires the use of direction, negotiation and delegation in differing serial and concurrent ways as the course progresses (p. 21).

Charismatic Authority as Initiation

According to Heron (1993), the third aspect of authority, charismatic authority, refers to “a facilitator’s influence on students and the learning process by virtue of their presence, style and manner, through their personal delivery of tutelary and political authority” (p. 31). Heron notes that “charismatic facilitators empower people directly by the presence of their own inner empowerment” (p. 31). Heron explains this as eliciting the emergence of the autonomy and wholeness of students through a behavioral manner, a timing and tone of voice, a choice of language, and ideas that proceed from the autonomy and wholeness of the facilitator. Heron asserts that this expressive presence “generates self-confidence and self-esteem in students, and enhances their motivation toward independence and integration of being” (p. 31).

The role of the facilitator. According to Heron (1989), the roles of a facilitator are classified in terms of educational alienation, cultural restrictions, and psychological defensiveness. Heron provides and describes the roles of a facilitator in these classified forms as follows. Educational alienation includes holistic course design and effective switching. Cultural restrictions include consciousness-raising and interruption. Psychological defensiveness includes culture-setting, permission-giving, growth ground-rules, honoring choice, conceptual orientation, confronting, emotional switching, bursting into laughter, lowering the cathartic threshold, individual work, and group autonomy. Heron identifies two fundamental and complementary challenges, including “the challenge of shaping a new kind of society” (locally and globally), and the challenge of living aware of a multi-dimensional universe (p. 40).

Aspects of task. According to Heron (1989), there are three aspects of task. The first aspect is planning. The second is operating (structuring). The third, but not the least, is meaning (knowledge and understanding).

Aspects of process. According to Heron, aspects of process include confronting, feeling, and valuing (Heron, 1989). Patton (2002) asks, “How does one recognize a program process?” (p. 474). According to Patton (2002), “learning to identify and label program processes is a critical evaluation skill; process is referred to as a way of talking about the common action that cuts across program activities, observed interactions, and program content” (p. 474). Patton asserts that the need to “describe the linkages, patterns, themes, experiences, content, or actual activities helps to understand the relationships between processes and outcomes” (p. 472), and to “interpret and judge the nature and quality of this process/outcomes connection” (p. 473). Patton suggests that the linkages are expressed as patterns, themes, experiences, content, actual activities, quotations, and program.

Skills. According to McKenzie (2014), students should study and practice these kinds of skills so that they can be successful contributors to their community. These skills include “a) collaborate, b) problem solve, c) create products of value, d) practice conflict resolution, e) self-monitor their work performance, and f) learn from risk-taking regardless of the outcome” (McKenzie, 2014, para. 2). McKenzie notes that the 20th century was the industrial age and the skills relevant at that time were “alignment, standardization, consistency of behavior, and ability to follow directions” (para. 3). Labor laws were enacted “to free children from inappropriate working conditions” (para. 4). Today, however, 21st-century skills are recommended. Children participate “in self-selected learning communities” (para. 4). Instructors participate “as facilitators, coaches, and mentors” (para. 4). Learning takes place everywhere.

Capturing the Processes in this Study

According to Patton (2002), a process consists of verb forms and noun forms. In this study, the verb forms include providing, working, employing, applying, and utilizing. The noun forms include “patient-centered care, interdisciplinary teams, evidence-based practice, quality improvement, and informatics” (Greiner & Knebel, 2003, p. 1). There are five competencies for the IOM standard. What do these different skills and knowledge have in common, and how can that commonality be expressed? In qualitative analysis, what language do people in the program use to describe what those skills and knowledge have in common? What language comes closest to capturing the essence of this particular process? Other processes identified as important in the implementation of a program are a) encouraging and managing stress; b) sharing in group settings; c) examining professional activities, needs, and commitments; d) assuming responsibility for articulating personal needs; e) exchanging professional ideas and resources; and f) formally monitoring experiences, processes, changes, and impacts (Patton, 2002, pp. 475-476).

Personal change. Patton (2002) asks, “How does one categorize changes in thoughts, feelings, and intentions about competences, skills, and processes?” According to Patton (2002), there are changes in a person that affect the person’s work. Patton notes that there are five kinds of changes that occur in a specific program, namely, a) “changes in skills”, b) “changes in attitudes”, c) “changes in feelings”, d) “changes in behaviors”, and e) “changes in knowledge” (Patton, 2002, p. 476). The next section is on participatory evaluation.

Participatory Evaluation

This section defines participation and participatory evaluation (PE). It provides forms, issues, principles, key elements, and the history of PE. It discusses participatory action research, participatory culture, participatory observation, and the appreciative inquiry method.

Participation. According to Whitmore (1998), participation is referred to as “a positive activity in the democratic societies” (p. 1). Weimer (2014) notes that “Participation can prepare students for discussion” that “can be more than a single, linear, question-answer exchange between an instructor and a student” (para. 2). Weimer provides several characteristics of participation that prepare students for discussion. These characteristics include a) “asking better questions, more open-ended questions, and more provocative and stimulating questions” (para. 3); b) “encouraging students to respond to each other, making comments about each other’s comments, and speaking directly to each other”; c) “holding students to the topic” (para. 5); d) “starting with an instructor’s question and a single student’s answer that contains fresh ideas, offers different perspectives, draws on relevant experiences, or relates to course content”; and e) “using that answer to facilitate a mini-discussion before going to the next question” (para. 6). Weimer believes that “participation and discussion are ends of a continuum” (para. 7); that “there is no clear designated point where an exchange is participation or discussion”, except at the extremes; and that in “between they can morph partly or fully in and out of each other”. Weimer suggests that when “the exchanges between students are no longer moving in new or interesting directions”, the instructor should “ask another question or introduce a different topic” (para. 7). Weimer stresses that “participation provides students with practice and feedback”, “develops students’ discussion skills”, and improves students’ “classroom interaction” (para. 8).

Bart (2011) provides five teaching strategies for creating a participatory classroom environment. These tips include a) “getting to know your students (create a welcoming and collaborative spirit in the classroom, ask students to share something unique about themselves)” (para. 4), b) “inviting students to start some of the classes (give students a five-minute presentation on how the material in a course relates to the material in another course the student is taking)” (para. 5), c) “finding out ‘what’s news?’ (i.e., ask students to connect events to course material)” (para. 6), d) “asking for a ticket to class (ask students to submit their assignment the following class)” (para. 7), and e) “building feedback into your course (ask students to do a self-assessment on how well they performed the task, and assess how the group performs as a whole)” (para. 8).

Shank (2013) provides six teaching strategies. These strategies include orienting courses around real-world problems, allowing students to discuss and introduce their own real-world problems, and using participatory strategies (case studies and situations from real-world practice). Other strategies include presenting theories and concepts within the context of application to real-world issues, using adult students as resources and experts where they have direct knowledge, and providing opportunities for students to share knowledge and experience (Shank, 2013).

Participatory evaluation. According to Tandon (1988), PE involves the elements of participation and evaluation. Tandon defines evaluation as a concept and practice in developing programs, plans, and activities. These programs, plans, and activities are designed and implemented to promote development, and they are evaluated regularly and frequently. Tandon notes that there are varied meanings given to PE. For example, Tandon refers to PE “as a process of action-reflection-action” (p. 5) and describes PE as “a methodology that makes evaluation an integral process of any planning; that implements a development initiative focusing on the people in it” (p. 8).

Tandon (1988) refers to PE as a process of individual and collective learning. It is an educational experience. It is a learning about one’s strengths, about one’s weaknesses; a learning about the way plans and programs get implemented; a learning about social processes and development outcomes; a learning about social reality and intervening in the same; and a learning about creation, development of organizations, ensuring their relevance, and longevity. It implies clarifying and rearticulating one’s vision and perspective about the development work one is involved in. This educational drive of PE methodology implies that various parties involved in a development program experience

PE as a learning process for themselves. The process is designed and structured so that it ensures learning (pp. 9-10). According to Tandon (1988), PE methodology is based on

“a world-view, a vision about human beings and their capacity and, on an interpretation of social reality” (p. 10).

Forms of participatory evaluation. According to Whitmore (1998), there are two principal forms of participatory evaluation. These forms are “practical participatory evaluation

(P-PE) and transformative participatory evaluation (T-PE)” (Whitmore, 1998, p. 1).

First, P-PE “is pragmatic and has its central function fostering evaluation use” (p. 1).

Second, T-PE is “based on emancipation and social-justice activism and focuses on the empowerment of oppressed groups” (p. 1).

Issues for consideration in participatory evaluation. According to Cousins and

Whitmore (1998), there are several challenges for participatory evaluators and interested people who engage in participatory activities. Cousins and Whitmore (1998) provide and describe issues for consideration in PE. These issues include “power and its ramifications”, “ethics”, “participant selection”, “technical quality”, “cross-cultural issues”, “training” (p. 18), and “conditions enabling PE” (p. 19). Cousins and Whitmore suggest that “Credible answers to these issues will come only from sustained PE practice and particularly from practice that includes deliberate mechanisms for ongoing observation and reflection” (p. 19). Cousins and Whitmore believe that “both participatory evaluators and the participants with whom they work will report on their experiences, thus informing professional understanding of these important issues” (p. 19).

Tandon (1988) provides several shortcomings of participatory evaluation. These failures and shortcomings are the following: a) “lack of an effective organizational system”, b) “lack of effective training” (p. 66), c) “lack future plans”, d) “communication and interpersonal difficulties in core group (the question of centrality-marginality, sharing of information, lack of clarity about initiatives, mistrust)”, e) “plateau and slide- back (some areas of achievements of the past are being reopened as problematic)”, and f)

“lack of intensive work with youth and women” (p. 67). Tandon suggests the following areas for future considerations: a) need for redefining the objectives, b) need for integrating new program, and c) need for redesigning the structure (Tandon, 1988).

Principles of participatory evaluation. According to Burke (1998), there are several principles for participatory evaluation. Among them are that the “evaluation must involve and be useful to the program’s end users”; the “evaluation must be context-specific, rooted in the concerns, interests, and problems of the program’s end users”; the “evaluation methodology respects and uses the knowledge and experience of the key holders”; and the “evaluation is not and cannot be disinterested” (p. 44). Others are that the “evaluation favors collective methods of knowledge generation”; the “evaluator (facilitator) shares power with the stakeholders” (p. 44); and the “participatory evaluator continuously and critically examines his or her own attitudes, ideas, and behavior” (Burke, 1998, p. 45).

Key elements of the participatory evaluation process. According to Burke

(1998), the key elements of PE process are a) the “process must be participatory, with the key stakeholders actively involved in decision making”; b) the “process must acknowledge and address inequities of power and voice among participating stakeholders”; c) the “process must be explicitly ‘political’” (p. 45); d) the “process should have multiple and varied approaches to codify data”; e) the “process should have an action component in order to be useful to the program’s end users”; f) the “process should explicitly aim to build capacity, especially evaluation capacity, so that stakeholders can control future evaluation processes”; and g) the “process must be educational” (Burke, 1998, p. 46).

History of participatory evaluation and debates. According to Brisolara

(1998), PE is “both held suspect and revered” (p. 25). A “PE model combines ideas and practices that are debated in the field with those that have been formulated several years of contentiousness” (p. 25). Brisolara provides several assumptions, including a) “the process of evaluation (and what is learned throughout the process) is an important outcome of the project”; b) “stakeholders hold critical, sometimes elusive, knowledge about the dynamics of the program and the needs that the program is intended to fulfill”; c) “stakeholders can make valuable contributions at various stages of the evaluation process”; d) “dialogue among diverse voices is a means of approaching a holistic understanding of a program”; e) “the evaluator assumes non-direct roles (facilitator, change agent, educator) in the interest of promoting collaboration”; and f) “the research process commits to actively applying what is learned in the service of people affected by the program” (p. 25). Brisolara provides the two models of PE, namely P-PE and T-PE.

According to Brisolara (1998), the two models share commitment to participation but they differ in alignment “on a continuum that ranges from practical (utilization-focused, within the status quo) to transformative (action-oriented, ideological)” (p. 26).

Participatory Action Research

Participatory action research (PAR) involves the moves of writing the self (in autobiography), researching up the social hierarchy, and combining research with action

(Hughes, 2003). Hughes describes ‘researching up’ as providing results that contribute to the struggles against power-holders to transform powerful institutions. At the end of the research process, these results are shared with trade unions and social movements.

Hughes notes that researching up and PAR have funding problems unless they fit public policy. Ideally, neither the researcher nor the funder names the research problem; the research participants do so together (p. 103). Thus, the requirement is that either a specialist agency connects people or organizations that have research needs with researchers, or the researcher needs to participate in community organizations (pp. 103-104). PAR involves the researcher working in partnership with people, combining education with research and research critique. PAR engages actively and connects people who share the same ideology (p. 104).

Participatory Culture

According to Jenkins (2009), a participatory culture has moved “the focus of literacy from individual expression to community involvement; the new literacies almost all involve social skills developed through collaboration and networking” (p. xiii).

Jenkins stresses that “skills build on the foundation of direct literacy and research, technical, and critical-analysis skills are learned in the classroom” (p. xiii). Jenkins provides “the new skills”, including “play”; “performance”; “simulation”;

“appropriation”; “multitasking”; “distributed cognition”; “collective intelligence”;

“judgment”; “transmedia navigation”; “networking”; and “negotiation” (p. xiv).

Participatory Observation and Sense-Making

According to De Vries (2005), one of the emerging design methodologies is called participatory observation. Participatory observation is a direct design. In participatory observation, a methodologist is “part of the design team and observes what has happened” (p. 51). This is the easiest way of taking the designers away from their field to a different field. The designers will then solve practical design problems from their experience (p. 51). De Vries notes that a design problem can be solved using the “combination of procedural and conceptual knowledge” (p. 60). De Vries maintains that this knowledge becomes “knowledge about the physical nature, knowledge about the functional nature, knowledge about the relationship between physical and functional nature, and knowledge of sequence of actions (knowledge of processes)” (p. 60).

Participatory sense-making is a process whereby individuals connect and collaboratively understand new ideas, exceeding their own limitations (De Jaegher & Di Paolo, 2007).

Characteristics of participatory evaluation. Tandon (1988) provides the characteristics of PE as the following. “The central characteristic of PE is that people involved in a given development program or organization, both as implementers and as beneficiaries, start participating in, and take charge of the evaluation efforts”. “The control over the process of evaluation remains in the hands of those who are developing and implementing and benefitting from the programs”. The “evaluation serves the interest of furthering the benefits and improving the programs and organizations involved in development at the base, and not those who are intending to control it from the top”.

Tandon reports that PE “is an attempt at redefining and reaffirming development as a

‘bottom-up’, ‘people-centered’, ‘people-controlled’ process” (p. 8).

According to Tandon (1988), the PE methodology depends on participatory models of development (p. 11). The emphasis of an evaluation exercise is on the ‘field’, and it requires the active involvement of local people who may be the beneficiaries (p. 12). A PE intervention has a process that is ‘institution-focused’. It requires the active involvement of the field staff and senior members of the organization. These senior members include the governing body members and other key parties in its environment (p. 13).

The scope and depth of the participatory evaluation. According to Tandon

(1988), PE is for people who are interested in improving their practice and sharpening their vision. PE is for people who are developing and are interested in evaluation activities. PE is a process that evaluates people’s activities, initiatives, plans, and outcomes. PE is a collective process of reflection and planning. PE is an educational experience for those involved in it (p. 14).

Tandon (1988) outlines some key steps in the process of planning and conducting

PE. Some of the key steps include “setting objectives (frames of reference)”,

“identifying parameters and information needed” (p. 16), and “identifying sources of information” (p. 17). Other key steps are “developing methods to obtain that information and data collection”, “analyzing data” (p. 17), “creating future scenarios”, and “evolving action plans” (p. 18).

Appreciative Inquiry (AI) as a Method for Participatory Change

When conducting participatory research, both quantitative and qualitative research instruments may be used. According to Shuayb (2014), quantitative tools facilitate the identification of positive experiences and visions of the desired population in a short time. Shuayb employed quantitative and qualitative tools to measure “students’, instructors’, and principals’ needs, positive experiences, and visions” (p. 302). The quantitative survey, with its open-ended questions, “helped identify the positive experiences in the schools in a short period of time and without interrupting the school’s schedule” and “helped identify the visions on how to make the school an even more effective place” (Shuayb, 2014, p. 302). Shuayb followed up with “in-depth individual and focus group interviews” as qualitative data that “helped shed more light on the various stakeholders” (p. 302).

Shuayb (2014) explains that researchers can reflect on the overall research process in each of the schools, assist the participant, and engage in evaluating the overall activity. Shuayb notes that focusing on positive experiences can enhance stakeholders’ participation and engagement in collaborative communication. Shuayb notes that training students as researchers was a way of empowering them. Shuayb notes that lack of commitment and political factors may block the implementation of AI, and then render change impossible. Shuayb claims that successful implementation of AI “requires commitment from the decision makers who have the authority and power to implement the visions of the stakeholders” (Shuayb, 2014, p. 306).

Beginning Teachers Becoming Professionals

Ginns et al. (2001) found that beginning teachers found journal reflections useful; they had learnt from them and had improved “their knowledge about their classroom practice, their school life and about the use of action research” (p. 129). They found that

“the beginning teachers benefited greatly from the participatory, collaborative, social and reflexive aspects of participatory action research” (p. 129). They argue that the direct

“induction methods tend to reproduce the profession, rather than use critical reflection that can lead to change, progress and reflection on practice” (p. 129).

Program effectiveness. Morrison et al. (2007) provide some details on program effectiveness. Morrison et al. note that “if all learners accomplish all objectives, the effectiveness of the program would be excellent;” that the instructor “must have previously decided the level at which the program would be accepted as effective;” and that “attainment of the 80% level by at least 80% of the learners in a class could be acceptable as a highly effective program” (p. 321). Morrison et al. provide reasons why no one can reach the absolute standard of mastery or competency (100%) in all instructional situations. These reasons include a) individual differences among learners, and b) a designer’s inability to design ideal learning experiences (p. 321). The next section is about methodology literature.

Methodological Literature

This section discusses triangulation and its aspects, and roles of qualitative research. According to Creswell and Plano Clark (2011), a methodological element of participatory worldview is participatory. Creswell and Plano Clark stress that in the methodological participatory worldview, the researchers “involve participants in all stages of the research and engage in cyclical reviews of results” (p. 42).

According to Slavin (1990), many teachers use “a mix of strategies” (p. 112). For example, a science teacher might use TGT or STAD to teach science information and vocabulary, Jigsaw II for expository material about science, and lab groups for lab work, all with the same teams. A social studies teacher might use TGT or STAD for geography and graph readings, Jigsaw for history, and discussion groups for contemporary social problems.

Rhetorical Literature

According to Creswell and Plano Clark (2011), a rhetorical element of the participatory worldview is advocacy and change. Creswell and Plano Clark note that in the rhetorical participatory worldview, the researchers “use language that will help bring about change and advocate for participants” (p. 42).

Summary

This literature review chapter provides philosophical, theoretical and empirical lenses through which readers can reach a deeper understanding of the exploration and examination of participatory teaching and learning in the classroom settings. Gropper’s metatheory of instruction, instructional theories, principles, and strategies were discussed.

Learning theories, strategies, and knowledge and skills were also discussed. This chapter also includes learning styles, self-efficacy, and achievement variables. Classroom assessment, group facilitation, participatory evaluation, methodological literature, and rhetorical literature were discussed. Finally, a summary of earlier works was provided.

The existing empirical findings were on instruction, achievement, self-concepts, and self-efficacy (Bloom, 1976; Dunn et al., 1989; Gregorc & Butler, 1984; Gropper,

1983; Silvernail, 1987); self-efficacy (Bandura 1977, 1982, 1986, 1997, 1993; Bandura &

Cervone 1983; Bandura & Locke 2003; Fitz-Gibbon, 1996; Ford, 1992; Reeve, 2005); and grouping and achievement (Bodner et al., 2014; Dahllöf, 1971; Felder & Brent, 2000;

Oakley et al., 2004; Slavin, 1990). These findings have some implications for curriculum

(Bergan, 1995; Bray & Rogers, 1995; Ludwigsen & Albright, 1994); for learning styles, preferences, and individual differences (Bray & Rogers, 1995; Ducette et al., 1996;

Knowles et al., 2012; McDaniel, 1995; Schanks, 1993; Trautwein et al., 2006); and for inter-professional teams (Chiu, 2004; DiGiovanni & McCarthy, in press; Felder & Brent,

2007; Moffic et al., 1983; Oakley et al., 2004; Okasha, 1997; Sheppard, 1992).

Achievement, instruction, and learning. According to Gropper (1983), achievement is a rising function of instruction (p. 44). Achievement should rise as a function of the number of conditions treated and as a function of the closeness of the match between need and the levels of attention delivered (p. 47); correlations between achievement and instruction were most significant, and the relationship between instruction and achievement should hold (Gropper, 1983, p. 48). Gropper notes that when two instructional theories and models address the same objectives, their similarities far outnumber their differences, and hence integration will be more useful than elimination (p. 48). Mastery learning begins with the notion that most students’ levels of achievement increase if instruction is approached sensitively and systematically (Bloom, 1976). Mastery learning is another effective way to improve both achievement and self-concept, as it is based on the assumption that all students can reach a high level of competence, if the right action is taken and enough time is allowed (Silvernail, 1987).

Self-efficacy. Bandura (1977), on self-efficacy, notes “that individual differences at one point in time can lead to choices that effectively change the actor’s life-space and magnify the impact of those personal characteristics” (p. 25). Bandura (1982) claims that the level of induced self-efficacy varies directly with performance accomplishments and inversely with emotional arousal. Bandura notes that the “level of perceived self-efficacy correlates positively with range of career options seriously considered and the degree of interest shown in them” (Bandura, 1982, p. 136). “Experiences that increase coping efficacy can diminish fear arousal and increase commerce with what was previously dreaded and avoided” (Bandura, 1982, p. 136). Bandura (1982) asserts,

“Knowledge of personal efficacy is not unrelated to perceived group efficacy; and that a collective efficacy is rooted in self-efficacy” (p. 142).

Bandura and Locke (2003) note that “the higher the perceived self-efficacy to fulfil educational requirements and occupational roles is, … the better they prepare themselves educationally for different occupational careers, and the greater is their staying power in challenging career pursuit” (p. 90). Bandura (1982) asserts that in any given instance both self-efficacy and outcome beliefs will best predict behavior; and that self-percepts of efficacy influence thought patterns, actions, and emotional arousal (Bandura, 1982). Bandura and Cervone (1983) found that “the higher the self-dissatisfaction with a substandard performance and the stronger the perceived self-efficacy for goal attainment, the greater was the subsequent intensification of effort” (p. 1017).

Bandura (1977) found that perceived self-efficacy and behavioral changes are related; that perceived self-efficacy partly and directly influences the choice of activities and settings and partly and directly influences coping efforts; and that perceived self-efficacy increases as efforts increase. Students with a learning goal orientation tend to make positive attributions for success and sustain their self-efficacy for learning (Bandura, 1993). Empowering students through self-efficacy training increases their self-efficacy (Reeve, 2005). As students’ perceived competence increases with timely feedback, their perception of their own learning abilities improves (Bandura, 1997).

Grouping/team and achievement. Grouping has little or no bearing on students’ achievement (Dahllöf, 1971). Dahllöf stresses that achievement should be regarded as a direct outcome of the grouping arrangements rather than the actual teaching process, the general style of instruction, or the teachers and their competence (p. 4). Felder and Brent (2000) report effect sizes from meta-analyses of studies of small-group learning in classroom-based educational interventions: a positive effect on students’ achievement (d = 0.51), a positive effect on students’ persistence (d = 0.46), and a positive effect on students’ attitudes (d = 0.55) (p. 23). Dahllöf (1971) provides some methodological problems, including a) the marked ceiling effects in some of the tests, b) the tests intended to measure the effect of two years’ differential grouping in the experimental group are identical with the tests used as control variables in the control group, c) the same curriculum objectives, d) the same number of hours and lessons a week and a year, and e) in the traditional classroom type, the teacher addresses the whole class (pp. 18-21).

Bodner et al. (2014) note that cooperative learning may improve student achievement and enhance students’ self-esteem (p. 142), and that the most important element in cooperative learning is to create an interactive environment. Cooperative learning strategies are effective, provide wide outcomes, enhance achievement, and improve intergroup relations (Slavin, 1990). Slavin notes that cooperative learning methods can be instructionally effective means of increasing students’ achievement when they use “group goals and individual accountability” (p. 32). Slavin (1990) concludes that cooperative learning strategies improve students’ achievement and that they positively influence achievement outcomes. Slavin asserts that “for any desired outcome of schooling, administer a cooperative learning treatment, and about two-thirds of the time there will be a significant difference between the experimental and the control groups in favor of the experimental groups” (p. 53). Cooperative learning has shown strong positive effects on almost every learning outcome (Oakley et al., 2004).

Self-concept, self-efficacy, learning styles, and achievement. Knowles et al.

(2012) note that individuals may differ in their approaches, strategies, and preferences as they learn activities; that those differences significantly affect learning; and that understanding those differences may help practitioners apply andragogy (adult learning) more effectively in practice. Academic self-concept can directly predict self-esteem and future academic achievement (Trautwein et al., 2006). Trautwein et al. note that separating students can negatively affect their self-concept, which in turn can negatively affect their self-esteem and academic achievement. Trautwein et al. found that self-esteem cannot predict academic achievement. Working in teams requires students to engage in teamwork (working collaboratively), communication (communicating the work of the group to outsiders), and reasoning (reasoning to solve problems) processes (Schanks, 1993). Knowledge of learning style will improve students’ self-concept and achievement (Dunn et al., 1989;

Gregorc & Butler, 1984). Fitz-Gibbon (1996) notes, “behavior is a function of the person, the environment, and the interaction of person and environment” (p. 174). Ford

(1992) notes that achievement/competence is a function of motivation, capability beliefs

(skill), and context beliefs (responsive environment) (p. 123).

Implication for learning styles. Ducette et al. (1996) provide some assumptions of learning styles, including a) students enter a learning situation with a variety of skills, preferences, and capacities; b) the skills, preferences, and capacities affect their learning; and c) matching the learner and the learning environment facilitates learning for the learner. Ducette et al. note that instructors should design multiple learning experiences that allow multisensory preferences, and that instructors should use multiple approaches so that different learners can understand the material. McDaniel (1995) notes that different training, theories, and working styles affect communication and accessibility. McDaniel found that both nurses and physical therapists complained that the other was inaccessible and would not communicate, suggesting differences in working styles. Bray and Rogers (1995) provide some challenges of learning styles, including a) different theories; b) different languages; c) different “practice styles”; d) different inaccessible providers; and e) different expectations for assessment and treatment (Bray & Rogers, 1995, p. 137).

Implications for IP team. Moffic, Brochstein, Blattstein, and Adams (1983) note that the discipline of students had some influence. Moffic et al. (1983) assert that social work students are more capable of executing roles in primary health care than psychiatric residents. Okasha (1997) notes that learning in teams jointly leads to professional socialization, peer respect, and community. Sheppard (1992) stresses that differences in linkage and collaboration between professionals, and differences in occupational culture or role expectations for inter-professional training, should not be ignored. To enhance collaboration, interventions should develop positive beliefs and high expectations among peers in interdisciplinary teamwork, solve problems of cooperation and clinical management, and develop new practice environments (Sheppard, 1992).

Chiu (2004) identifies that students’ actions and instructors’ actions can improve students’ future problem solving. Chiu notes that students sometimes solve problems; that the instructor should monitor those students who were not making progress; and that the instructor should evaluate students’ work, provide content support, and use guidelines to “better students’ performance than other instructor behaviors” (p. 393). Chiu concludes that students failed to call for help, and that the instructor should monitor their work and intervene as necessary. Felder and Brent (2007) suggest that the instructor should form teams rather than permitting students to select their own teammates; the instructor should promote positive interdependence; the instructor should provide individual accountability; and the instructor should help students to develop teamwork skills (pp. 7-9). Oakley et al.

(2004) recommend that three- and four-person teams are appropriate for effective teamwork. DiGiovanni and McCarthy (in press) note that “the task of revising and constructing curricula materials was in itself a team-building activity, tailored to address core competency areas for students” (p. 35). DiGiovanni and McCarthy (in press) observed that “students came in rating their skills as high, with the exception of two areas-informatics and quality improvement, in which students started with a much lower baseline” (p. 37). DiGiovanni and McCarthy suggest that “Classroom experiences should be project centered and grounded in major issues such as core competencies, with a greater focus on execution and less emphasis on the acquisition of content knowledge”

(p. 50). DiGiovanni and McCarthy conclude that “IPE program work involve significant collaboration, trust, and some sacrifice in order for such novelties to become sustainable”

(p. 50).

Implication for curriculum. Moore (2004) notes that designing an instructional sequence helps students gain deep understanding. Moore concludes that implementing appropriate learning strategies may guide students’ behavior to master the content materials. DiGiovanni and McCarthy (in press) observed that “students came in rating their skills as high, with the exception of two areas-informatics and quality improvement, in which students started with a much lower baseline” (p. 37). Bergan (1995) notes that curriculum should be critically examined for gaps between the school training and the marketplace needs. Ludwigsen and Albright (1994) propose that a hospital training program in psychology should comprise graduate course work, internship, and in-service training. Ludwigsen and Albright recommend that certification in hospital practice, licensing and exempting, and lifelong learning should be provided (Ludwigsen & Albright, 1994). Bray and Rogers (1995) note that training should provide more information about professional training, the methods of evaluating and treating patients, and what professionals can and cannot offer. Bray and Rogers recommend that the development of standards for training and fostering collaborative relationships should include a) negotiating language limitations; b) clarifying theories; c) ensuring trust; d) addressing differences in time scheduling; and e) noting the competitiveness of the practice (Bray & Rogers, 1995, p. 137).

Chapter 3: Methodology

Introduction

This chapter presents the methodology used for conducting this study. The methodology explains the research design, the population, the sampling plan, and the sample size selection. It follows with the instrument, the data collection, and the data collection procedures. Furthermore, it describes the data analysis procedures for phase 1 (the quantitative data) and for phase 2 (the qualitative data). Finally, it describes the pilot study.

Research Design

The research design was a quasi-experimental nonequivalent control group design.

Figure 2. Showing the quasi-experimental design: a nonequivalent control group

A nonequivalent control group design. A nonequivalent control group design involved an experimental (participatory) group and a control (direct) group. Both groups were given a pre-survey and a post-survey. The two groups were not randomly assigned but were from intact classes. According to Cook and Campbell (1979), nonequivalent means “that the expected values of at least one characteristic of the groups are not equal even in the absence of a treatment effect” (p. 148). Cook and Campbell note that understanding of the nature of the group nonequivalence implies understanding of the selection process and how it differs from being random.

According to Creswell (2014), Campbell and Stanley (1963), Cook and Campbell

(1979), the nonequivalent control group design has threats to both internal and external validity. For internal validity, Campbell and Stanley (1963) note that the interaction of selection and maturation is one source; another possible threat to internal validity is regression. Campbell and Stanley also note that “interaction of testing and treatment” is one threat to external validity; other possible threats include “interaction of testing and treatment and reactive arrangements” (p. 40). According to Warner (2013), ANCOVA is likely to be used in the following situations: a) using ANCOVA to correct for nonequivalence in subject characteristics (through adjusting “the group means for the Y outcome variable to the values they might have had if means on the covariate (Xc) had been equal across groups” (p. 716)); b) using ANCOVA to analyze pretest/posttest data; and c) using ANCOVA to reduce error variance due to individual differences among subjects (pp. 716-717). Warner provides some restrictions on ANCOVA. These restrictions include a) the normality and linearity of the covariate, b) no interaction between the covariate and the treatment, c) no influence of the treatment on the covariate (the covariate should be measured before treatment), and d) the measure of the covariate should be reliable (p. 717).

In this study, the researcher used all the data in the database for 93 HSP students from six cohort groups. After cleaning the data, the final sample for the quantitative data analysis was 90 HSP students, consisting of 40 students who received participatory instruction from the third to fifth cohort groups and 50 students who received direct instruction from the sixth to eighth cohort groups. In addition, six (6) students (three from each instructional type) were identified for the qualitative data analysis through boxplots of SPSS Explore outliers using the initial perceived achievement score and IP team.

Population

The target population for this study was all of the graduate and undergraduate students who enrolled in or took the HSP 4510/5510 course at a Midwestern university campus from the fall 2013 semester to summer 2015. The HSP 5510 course was delivered face-to-face, and the students accessed the course material through the Blackboard learning management system.

The researcher was a Graduate Assistant, a programmer, a data analyst, a content designer, a course developer, a research methodologist, a reviewer, and an evaluator for the HSP 4510/5510 course from fall 2012 to spring 2015. The researcher had access to the sampling frame (enrollment list) and the existing database. The sampling frame of the students from spring 2013 to summer 2015 was collected from the project director.

This list enabled the researcher to know the number of students who were enrolled in the program every semester. The researcher collected the password to the Google Docs

MedTAPP online database from the project director. The database consisted of students’ pre- and post-surveys and their journal reflections from spring 2013 to summer 2015. The rationale behind using HSP students enrolled in the HSP 5510 course was the interdisciplinary nature of the class. The intact class consisted of students who were admitted from different fields, namely, medicine, nursing, social work, nutrition, speech language pathology, physical therapy, audiology, and music therapy. The interdisciplinary nature of the class allowed students to bring different experiences, knowledge, and skills. In addition, the interdisciplinary nature of the class allowed students to communicate and share experiences in inter-professional teams and among profession teams. The interdisciplinary nature of the class enabled students to learn and understand the roles of other professions and how each professional role interrelates.

Furthermore, the interdisciplinary nature of the class enabled students to make collaborative decisions and to negotiate roles effectively with other professions on issues.

This composition of the class, though heterogeneous, might help students’ participatory learning in the final group module project activities. The population and sample sizes are shown in Table 1.


Table 1

Number of Students in the Population and Sample for Pilot and Main Studies

Met Cohort #Enroll C1 Pilot Main C2 #Phas1 C3 #Phas2

P1 Sp2013 24 1 23

P2 Su2013 22 4 18

P3 Fa2013 18 0 18 18 1 17 4

P4 Sp2014 16 5 11 11 11

P5 Su2014 13 1 12 12 12 3

D6 Fa2014 20 0 20 20 1 19 2

D7 Sp2015 18 1 17 17 17

D8 Su2015 15 0 none 15 1 14 3

Total 146 119 93 3 90 6 6

Note. Met = method; P1 = participatory cohort 1; P2 = participatory cohort 2; …; D6 = direct cohort 6; …; D8 = direct cohort 8; Sp = spring; Su = summer; Fa = fall; C1 = Criterion 1 (having pre- and post-test survey and journal reflection); C2 = Criterion 2 (meeting ±3SD of ipAch score, fpAch score, and cpAch score, Mahalanobis test, and casewise diagnostic test); C3 = Criterion 3 (being a multivariate outlier of IP team and major, or being outside ±2SD of ipAch score, fpAch score, and cpAch score); Phas1 = phase I; and Phas2 = phase II.


Sampling Plan

A convenience sampling technique (intact class) was used for the first phase of the study, and a purposeful sampling technique was used for the second phase. The researcher had access to the sampling frame (students’ enrollment list) of the HSP students who were enrolled in the HSP 5510 course (see Table 1) and their existing database (MedTAPP Google Drive). The participatory group (experimental group) comprised 93 target students in the five cohort groups. Three out of the five cohort groups were chosen to represent a full 2013/2014 academic year. These three cohort groups were from three intact classes comprising 47 students. The students’ data were taken at two time points, comprising pre- and post-survey responses. Each data set was coded and labeled with a student’s given number to protect students’ identity. Students’ journal reflections were also retrieved from the database, coded, and labeled with their numbers.

Those students who had all three datasets and belonged to cohorts P3, P4, and P5 were selected for the quantitative data analyses. The three datasets were pre-survey data, post-survey data, and journal reflection data. With these criteria, cohorts P3, P4, and P5 comprised 41 students who met all of the criteria and six (6) who did not. Examination of these six students revealed that some of them did not have either pre-survey data or post-survey data. A preliminary data screening was conducted; one of the students had an initial self-concept score above +3 standard deviations, was considered a multivariate outlier with Mahalanobis distance, p < .001, and was deleted from the sample for the quantitative data analysis during a casewise diagnostic test (Phase I) but added to the selected sample for the qualitative data analysis (Phase II). Further examination of the data based on initial perceived achievement score, by major, team preference, and inter-professional teams revealed that 5 out of the remaining 40 students were considered multivariate outliers with Mahalanobis distance, p > .001, using boxplots of SPSS Explore. The sample size of the participatory group for the quantitative data analysis was 40 HSP students. Furthermore, these five (5) students and the one (1) student who was deleted using the casewise diagnostics test were examined for patterns. The results revealed that three (3) students were from the same 2013/2014 fall cohort group, while the remaining students were from different cohort groups. Thus, for the qualitative data analysis, there were three (3) HSP students (i.e., one (1) student deleted through the casewise diagnostic test and two students obtained from the boxplot outliers).

The direct group (control group) comprised 53 enrolled students in three cohort groups representing a full 2014/2015 academic year. The same criteria were repeated for the control group. With these criteria, 52 students were selected, and one (1) did not meet all of the criteria. Preliminary data screening was conducted; two (2) of the 52 students had initial self-concept scores below -4 standard deviations from the mean, were considered multivariate outliers with Mahalanobis distance, p < .001, and were deleted from the sample for the quantitative data analysis (Phase I) but added to the selected sample for the qualitative data analysis (Phase II). Further examination of the data based on initial perceived achievement score, by major, team preference, and inter-professional teams revealed that three out of the remaining 50 students were considered multivariate outliers with Mahalanobis distance, p > .001, using boxplots of SPSS Explore. The sample size of the direct group for the quantitative data analysis was 50 HSP students. These three (3) students and the two (2) students deleted using the casewise diagnostic test were screened for patterns. The results revealed that three (3) students were from the same 2014/2015 fall cohort group, while the remaining students were from different cohort groups. Thus, for the qualitative data analysis, there were three (3) HSP students (i.e., one (1) student who was deleted using the casewise diagnostic test and two students obtained from the boxplot outliers).

Meyers et al. (2013) noted that extreme values could distort the results of a statistical analysis (p. 37). In this study’s sampling, students whose z-scores of initial perceived achievement fell above +2 or below -2 standard deviations were selected for the qualitative data analysis (the second phase). The rationale for doing this was not to lose those important participants, since they might be eliminated through the quantitative data analysis. The comments on the post-survey and weekly journals of these students were retrieved from the existing database for qualitative data analysis. According to

Teddlie and Yu (2007), “selecting those cases that are the most outstanding successes or failures related to some topic of interest … are expected to yield especially valuable information about the topic of interest… thereby allowing for comparability across those cases” (p. 81).
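As a minimal illustration of this kind of screening (the study itself used SPSS Explore, boxplots, and casewise diagnostics), the following Python sketch flags multivariate outliers by Mahalanobis distance at p < .001 and marks cases beyond ±2 standard deviations on the initial score; the file name and column names (ipAch, fpAch, cpAch) are hypothetical stand-ins.

```python
# Illustrative outlier screening; not the study's actual SPSS procedure.
import numpy as np
import pandas as pd
from scipy.stats import chi2, zscore

df = pd.read_csv("hsp_scores.csv")           # hypothetical data file
scores = df[["ipAch", "fpAch", "cpAch"]]     # initial, final, change scores

# Mahalanobis distance of each case from the multivariate centroid
diff = (scores - scores.mean()).values
inv_cov = np.linalg.inv(np.cov(scores.values, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Flag cases with p < .001 on a chi-square with df = number of variables
p = chi2.sf(d2, df=scores.shape[1])
df["multivariate_outlier"] = p < .001

# Cases beyond +/-2 SD on the initial score become Phase II candidates
df["phase2_candidate"] = np.abs(zscore(df["ipAch"])) > 2
```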

Sample Size Selection

The determination of sample size was a difficult issue. How large a sample should be has no definite answer. The sample size depends on three factors, namely, alpha level, power, and effect size (Agresti & Finlay, 2008). The alpha level refers to the probability of rejecting the null hypothesis when the null hypothesis is true (a Type I error), which was set at .05. The power of a statistical test of a null hypothesis is the probability of making the right decision and is defined as the probability of rejecting the null hypothesis when the null hypothesis is false. “An effect size identifies the strength of the conclusions about group differences or the relationships among variables in quantitative studies … it can be used to explain the variance between two or more variables or differences among means for groups” (Creswell, 2014, p. 165).

In this study, G*Power was used to compute the sample effect size. Post hoc, the acceptable risk of making a Type I error (alpha) was set at 0.05. For the independent samples t-test, the effect size for the difference between two independent means (two groups), with sample sizes n1 = 40 and n2 = 50, was computed.
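As an illustrative complement (the study used G*Power for this computation), the sketch below shows a comparable post hoc calculation in Python with statsmodels: a pooled-SD Cohen's d followed by the achieved power for two independent groups of 40 and 50. The group means and standard deviations shown are hypothetical placeholders, not the study's values.

```python
# Post hoc effect size and power; group statistics are placeholders.
import numpy as np
from statsmodels.stats.power import TTestIndPower

n1, n2 = 40, 50                    # participatory and direct group sizes
m1, s1 = 5.8, 0.7                  # hypothetical mean and SD, participatory
m2, s2 = 5.3, 0.8                  # hypothetical mean and SD, direct

# Cohen's d with a pooled standard deviation
sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / sp

# Achieved power for a two-tailed independent samples t-test at alpha = .05
power = TTestIndPower().solve_power(effect_size=d, nobs1=n1,
                                    ratio=n2 / n1, alpha=0.05)
print(f"d = {d:.2f}, achieved power = {power:.2f}")
```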

Sample size. According to Warner (2013), hierarchical multiple regression requires that the minimum ratio of valid cases to independent variables be at least 5 to 1 (p. 570). In this study, the ratio of valid cases (90) to the number of independent variables (3) was 30 to 1, which was greater than the minimum ratio. The requirement for a minimum ratio of cases to independent variables was therefore satisfied. In addition, the ratio of 30 to 1 satisfied the preferred ratio of 15 cases per independent variable.
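To make the hierarchical (stepwise-entry) logic concrete, the following Python sketch fits the covariate alone and then adds the grouping predictors, reporting the change in R-squared. This is only an illustrative equivalent of the SPSS analysis; the data file and the column names (fpAch, ipAch, instr_type, team_pref) are hypothetical.

```python
# Hierarchical regression sketch; an illustrative stand-in for the SPSS run.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsp_scores.csv")   # hypothetical data file

# Step 1: covariate only (initial perceived achievement)
m1 = smf.ols("fpAch ~ ipAch", data=df).fit()

# Step 2: add instructional type and team preference
m2 = smf.ols("fpAch ~ ipAch + C(instr_type) + C(team_pref)", data=df).fit()

# The R-squared increment is the variance uniquely added at Step 2
print(f"R2 step 1 = {m1.rsquared:.3f}")
print(f"R2 step 2 = {m2.rsquared:.3f}, change = {m2.rsquared - m1.rsquared:.3f}")
```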


Figure 3. Showing sampling design for the phases 1 and 2

P stands for participatory instructional group for cohorts 3 to 5 (P3, P4, & P5), and D stands for direct instructional group for cohorts 6 to 8 (D6, D7, & D8).


The grant. The grant started mid-Summer 2012 for fall 2012 registration. The initial award was for 1 year and then renewed for 2 more years. The award ended in June

2015. The funding source was Medicaid Technical Assistance and Policy Program

(MEDTAPP) Healthcare Access Initiative (HCA) (see Appendix O, Table 113). The goal of the funder was to encourage and provide integrated training programs that support the Medicaid population in Ohio. Students in the HSP 5510/4510 course received a tuition waiver and a stipend of $3600 for the semester they participated in the course.

The conditions were that students would regularly participate in class activities and that their commitment was that their first professional position on graduation would be in a facility that served patients on Medicaid.

Instrument

The instrument, Institute of Medicine Self-rated Knowledge Achievement

(IOMSKA) survey (see Appendix D) was designed by the principal investigators because a valid actual achievement test would have been difficult to construct for this material. The instrument was validated in spring 2013 when it was used with the first cohort group. The IOMSKA survey measured the health professions students’ perceived achievement levels on the IOM core competencies in a group module project. The IOMSKA survey was designed to collect both quantitative and qualitative data. The survey had five item questions on the IOM standards. These questions were closed-ended. Each question represented a construct. The questions were quantitatively rated on a 7-point knowledge scale ranging from “No knowledge” (1) to “Expert” (7) with no additional labels marked. Thus, the instrument was used to measure overall perceived achievement. The survey also had five items that were open-ended questions on the five constructs. The students were requested to provide comments on the perceived knowledge they rated. Demographic information (sex, status, major, team preference, problems working in teams or working alone, benefits of working in teams or working alone, and perceptions of other disciplines) was gathered in the survey as well. The principal investigators had developed the IOMSKA survey. The overall reliability was examined with the understanding that the standards were separate constructs. For the pilot study, the reliability coefficient (Cronbach’s alpha) for the five questions of the overall initial perceived achievement was 0.68, and that of the overall final perceived achievement was 0.77. For the main study, the reliability coefficient for the five questions of the overall initial perceived achievement was 0.70, and that of the overall final perceived achievement was 0.84.
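For readers who want to see how such an internal-consistency coefficient is obtained, the sketch below computes Cronbach's alpha for five item ratings in Python. It is only an illustration of the formula; the study reported its alphas from SPSS, and the file and item column names here are hypothetical.

```python
# Cronbach's alpha for the five 7-point IOM items; an illustrative computation.
import pandas as pd

items = pd.read_csv("iomska_pre.csv")[
    ["pcc", "ip_teams", "ebp", "qi", "informatics"]   # hypothetical item names
]

k = items.shape[1]                         # number of items (five standards)
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```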

Data Collection

Ohio University IRB authorization was obtained for this study. Permission had been obtained for using the existing data from the database. An approval letter was obtained from the principal investigator to use the data for two years (see Appendix E).

Password to the Medicaid Technical Assistance and Policy Program (MedTAPP)

Database was obtained. The MedTAPP database was created online in Google Docs. The project director shared the database with the researcher.

Weekly students’ journals. A critical part of journal writing was reflection.

HSP students wrote weekly journals on their reflections and experiences in participating in the group module project using the IOM standards. The journal writing covered the class activities/assignments, the class plan, and the standards covered every week. These journals were uploaded every week onto the database. On average, a student’s journal entry for a week was half a page. During fall and spring, students in both the participatory and direct groups made a total of 14 journal entries. For the students in the participatory group, all of the journal entries were used for the qualitative data analysis; however, for the students in the direct group, only the first two weeks’ entries were used for the qualitative analysis in this study. This was because, after the second week’s class meeting, the students took their post-survey at the end of the class (see Figure 4). During summer, students in both the participatory and direct groups made, on average, two journal entries per week for seven weeks, making a total of 14 journal entries.


[Figure 4 timeline: the participatory group completed the pre-survey (P1) in Wk1 and the post-survey (P2) in Wk14, with participatory instruction in between; the direct group completed the pre-survey (D1) in Wk1 and the post-survey (D2) in Wk2, with direct instruction between them and participatory instruction afterward.]

Figure 4. Showing the weeks the pre-survey and post-survey questions were administered

Note. P1 = participatory group pre-survey; P2 = participatory group post-survey; D1 = direct group pre-survey; D2 = direct group post-survey; Wk1 = First week meeting; Wk2 = Second week meeting; and Wk14 = Last week meeting.

Data Collection Procedure

Data were retrieved from the HSP 5510 database from 2013 spring semester to

2014/2015 summer semester upon a request from the principal investigators. One hundred and forty-six (146) HSP students enrolled in the HSP 5510 course. The existing data in the database were ideal for this study. These data consisted of pre- and post-survey (quantitative and qualitative) data, weekly class plans, students’ weekly journal data, individual students’ assignments, students’ group module assignments and evaluations, and students’ final group module project and evaluations.

Data Analysis Procedure Phase 1: Quantitative Data

Assumptions. The researcher selected those students based on the assumption that the course syllabus, the class plans, the assignments, and class activities were the same for each semester. The course materials, technology issues, and resource materials were also the same for each semester. Blackboard was the main learning management system used to deliver course content. Students had access to the Internet and iPad.

There was a two-hour face-to-face in-class meeting with the instructor. It was further assumed that the pre-survey and post-survey situations were comparable despite the differences in the time, semesters, and teaching methods. Furthermore, the HSP students in the two groups (the participatory group and the direct group) were compared with respect to the change in overall perceived achievement scores.

The group module project was a project that included the entire IOM standards. It was assumed that each student had covered all prerequisites necessary for participating in the IOM standard. It was also assumed that the background factors (class size, total time at disposal, instructor utilization, access to resources, localities, and location in community), general environmental factors (region, cultural climate, political structure, general social conditions), individual factors (instructors, social background, prior training; students, social background, prior training, intelligence, personality traits, school motivation), and curriculum process (general method of teaching, actual use of teaching aids, homework, time for units) were constant.

Syllabus unit, school types, teaching time, grade, class size, teacher variable, regional differences, lessons, objectives, time, homework, general intelligence, team preference, and groupings were controlled. The students were familiar with the IOM competency skills in their various disciplines. They were knowledgeable and skillful

(experts in their fields) in group projects and clinical cases during their high school education. These assumptions were examined in the pre-survey before the participatory instruction.

Statistical assumptions. Quantitative data analysis included selected students’ responses on the pre-survey items. The results from the pre-survey analysis helped in knowing initial perceived achievement of the students or the experiences individual students brought into the study. Initial perceived achievement of the students before the study was very important because the study was focused on comparing the differences in students’ perceived achievement scores on the IOM standards in a group module project after being taught using a participatory learning instruction. The independent variables were instructional types, team preference, IP team, and initial perceived achievement

(covariate) scores. The dependent variable was the final perceived achievement scores.

The descriptive statistics (mean scores, standard deviations, extreme values) were computed using SPSS Explore (boxplots) to identify any outlier cases on the categorical and scale variables. Inferential statistics (independent samples t-tests, partial correlation, univariate ANCOVA, and hierarchical multiple regression analysis) were computed to provide statistical evidence for change in scores, gain scores, difference in group mean scores, the nature, magnitude, and the impact of the effect due to participatory evaluation

(PE) instruction after controlling for the students’ initial perceived achievement scores.

The quantitative results were then used to plan the qualitative follow-up retrieval of journal reflections for the six students from the database (Creswell, 2014). These results provided general insights into the qualitative data analysis. Patton (2002) notes, “The failure to find statistically significant differences in comparing people on some outcome measure does not mean that there are no important differences among those people on those outcomes” and that “The differences may simply be qualitative rather than quantitative” (p. 151).
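As one illustrative way of carrying out the partial correlation listed among the inferential statistics above (the actual analyses were run in SPSS), the sketch below correlates the residuals of the final score and of a dummy-coded instructional type after each has been regressed on the initial score, which is equivalent to a first-order partial correlation. The file name, column names, and coding are hypothetical.

```python
# Partial correlation via residuals; an illustrative stand-in for the SPSS run.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("hsp_scores.csv")                          # hypothetical file
df["instr"] = (df["instr_type"] == "participatory") * 1.0   # dummy-coded group

def residuals(y, z):
    """Residuals of y after removing the linear effect of the control z."""
    slope, intercept = np.polyfit(z, y, deg=1)
    return y - (slope * z + intercept)

# Final perceived achievement vs. instructional type, controlling for ipAch
r, p = pearsonr(residuals(df["fpAch"], df["ipAch"]),
                residuals(df["instr"], df["ipAch"]))
print(f"partial r = {r:.2f}, p = {p:.3f}")
```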

A significance level of .05 was used for all tests of hypotheses, except when it was necessary to carry out post hoc analysis. The Statistical Package for the Social Sciences (SPSS) version 16 was used for all the quantitative data analyses. G*Power 3.0.10 was used to compute the effect sizes for the independent samples t-tests.

Independent t-test assumptions. The appropriate parametric significance test is the independent samples t-test. The independent samples t-test has three assumptions, namely, independence, normality, and equal variances. The assumption of independence was that each student’s initial perceived achievement score, initial self-concept score, final perceived achievement score, and change in perceived achievement score were interval-level measurements and independent of the others. The assumption of normality was assessed through the Shapiro-Wilk normality test (or the Kolmogorov-Smirnov test for normality); the change in perceived achievement score for students working in teams, p = .67 (or p = .20), and for students working alone, p = .45 (or p = .20), were normal (see Appendix J, Table 90). Further, the assumption of normality was assessed through visual examination of histograms and normal probability plots of the scores; a box and whiskers plot of the scores within each group was used to identify multivariate outliers. By examining the boxplot, it appeared that the change in perceived achievement (cpAch) scores for students working in teams and students working alone were relatively equal (see Appendix J, Figure 50) for instructional type. Finally, the assumption of equal variances of scores was assessed for violation through the Levene test for equal variances (p = .06) across groups, supporting the assumption of equal variances between the scores of students working in teams and students working alone. According to Warner (2013, p. 222), when the assumption of equal variances of scores within and between groups is violated, the data should be transformed, or outlier scores should be removed or modified.
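For illustration, these same assumption checks and the t-test itself can be expressed in a few lines of Python (the study ran them in SPSS); the group labels and column names below are hypothetical.

```python
# Shapiro-Wilk, Levene, and the independent samples t-test; illustrative only.
import pandas as pd
from scipy import stats

df = pd.read_csv("hsp_scores.csv")
teams = df.loc[df["team_pref"] == "teams", "cpAch"]
alone = df.loc[df["team_pref"] == "alone", "cpAch"]

# Normality within each group (p > .05 supports normality)
print(stats.shapiro(teams), stats.shapiro(alone))

# Homogeneity of variance across groups (p > .05 supports equal variances)
print(stats.levene(teams, alone))

# Independent samples t-test on the change in perceived achievement scores
t, p = stats.ttest_ind(teams, alone, equal_var=True)
print(f"t = {t:.2f}, p = {p:.3f}")
```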

ANCOVA assumptions. According to Meyers et al. (2013, p. 156), “ANCOVA provides us with a way to statistically control for one or more variables that we believe affect the dependent variable but which we cannot or choose not to experimentally control for.” To statistically control a variable, Meyers et al. provide the general structure of ANCOVA, including “adjusting the scores on the dependent measure based on the covariate and performing an ANOVA on the adjusted scores” (p. 158). Comparison of groups, according to Meyers et al., is “accomplished by performing alpha-corrected t-tests on the adjusted (estimated marginal) means (means that are adjusted for the effects of the covariates)” (Meyers et al., 2013, p. 201). Covariates refer to continuous variables “that are not parts of the main experimental manipulation, but have an influence on the dependent variable” (Field, 2013, p. 479). Field noted that covariates are introduced into ANOVA “To reduce within-group error variance” (p. 480) and to eliminate possible confounding variables. Field (2013) stressed that ANCOVA is a linear model, and that the assumptions of independence of the covariate and treatment effect, and of homogeneity of regression slopes, should be considered (p. 484).

According to Warner (2013), the rationale for adding a covariate may be to increase the opportunity to find statistical significance for the factors. Each independent variable is attributed only the variance in the dependent variable that it uniquely explains. The inclusion of a covariate decreases the variance in the dependent variable that remains to be explained by the factors. With less variance to explain, the chance that a factor will explain a significant portion of the variance increases. Other reasons include accounting for the effect of a variable that affects the dependent variable but could not be accounted for in the experimental design, and controlling for variables that rival the independent variable of interest in observational studies (Warner, 2013, p. 689).

The researcher decided to employ ANCOVA because the study sample was from intact classes (i.e., a convenience sample); because the relationship between actual and final perceived achievement scores is far from perfect; and because the purpose of the study was to compare the effectiveness of two instructional types. It is believed that ANCOVA provides a fair comparison of the adjusted and unadjusted final perceived achievement mean scores, controlling for the initial perceived achievement scores.

ANCOVA would help the researcher to identify the proportion of the variance in the dependent variable (final perceived achievement score) that was uniquely explained by the third variable (e.g., major, team preference, or IP team). Before conducting an ANCOVA, the assumption of homogeneity of regression slopes was tested. This test evaluated the interaction between the covariate and the fixed factor (independent variable) in the prediction of the dependent variable. According to Warner (2013), a significant interaction between the covariate and the factor indicates that the differences on the dependent variable among groups vary as a function of the covariate. Warner noted that, for a comparison of teaching methods, the covariate should be measured prior to the teaching intervention so that it could not interact with the teaching treatment or intervention (p. 694); the covariate-by-treatment interaction should not be significant; and measures of the covariate should be highly reliable (Warner, 2013). Warner stressed that, after adjustment for the covariate, the rank ordering of the dependent variable means across treatment groups should not change drastically or differ greatly across the factor groups. In this study, the units of analysis were the factor groups (i.e., instructional type, team preference, major, IP team, gender, and status), the covariate was the initial perceived achievement score, and the dependent variable was the final perceived achievement score.
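To make the two-step logic concrete (first testing the covariate-by-factor interaction, then fitting the ANCOVA model), the following Python sketch illustrates an equivalent analysis outside SPSS; the file name and the variable names (fpAch, ipAch, trt) are hypothetical:

# Sketch of the homogeneity-of-regression-slopes check and the ANCOVA model.
# The data file and variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("hsp_scores.csv")  # hypothetical file with ipAch, fpAch, trt

# Step 1: test the covariate-by-factor interaction; it should be non-significant.
slopes = smf.ols("fpAch ~ ipAch * C(trt)", data=df).fit()
print(sm.stats.anova_lm(slopes, typ=3))

# Step 2: if the interaction is non-significant, fit the ANCOVA model proper.
ancova = smf.ols("fpAch ~ ipAch + C(trt)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=3))
print(ancova.params)  # parameter estimates (contrasts of adjusted means)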

Warner (2013) provides some strategies for screening data for ANCOVA assumption violations. First, examine whether the histograms for the dependent variable and each of the covariates are “approximately normal in shape with no extreme outliers” (p. 694). Second, examine whether the scatter plots between the dependent variable and each covariate, and between “all pairs of covariates are approximately linear without extreme bivariate outliers” (p. 694). Third, evaluate the homogeneity of variances for the dependent variable and assess “the degree to which the covariate is confounded with levels of” (p. 694) the independent variable. Finally, assess whether the treatment-by-covariate interaction for each covariate is non-significant (Warner, 2013, pp. 694-697).

Bivariate data screening. When comparing means of quantitative variables across groups, the data should satisfy the following assumptions: all of the scores on the quantitative variables should be normally distributed; all of the observations should be independent; and all of the population variances should be equal (Field, 2013; Meyers et al., 2013; Warner, 2013).

When assessing the relationship between two quantitative variables (a dependent and an independent variable), the data should satisfy the following assumptions. Scores on the independent and dependent variables should each have a univariate normal distribution shape. The joint distribution of scores on the independent and dependent variables should have a bivariate normal shape without extreme bivariate outliers. The independent and dependent variables should be linearly related. The variance of the dependent variable scores should be the same at each level of the independent variable (the homogeneity of variance assumption) (Warner, 2013, p. 164).

Multiple regression assumptions. The assumptions behind multiple regression, according to Argyrous (2005), are that a) the “dependent variable is measured on an interval/ratio scale”; b) “the independent variables are measured on interval/ratio scales or are binomial”; c) “observations for each case in the study are independent of the observations for the other cases in the study”; d) “the relationship between the independent variable is linear”; e) “the error terms are normally distributed for each combination of the independent variables”; f) “the error terms are of equal variance (homoscedasticity)”; and g) “each of the independent variables is independent of each other (there is no multicollinearity)” (Argyrous, 2005, p. 198). Argyrous suggests a way of entering variables in blocks, noting that students’ background variables should be entered in the first block, and then behavioral variables in the next block.

Hierarchical (blockwise entry) multiple regression is employed to evaluate the relationship between a set of independent variables and the dependent variable after controlling for, or taking into account, the impact of a different set of independent variables on the dependent variable. In hierarchical multiple regression, Field (2013) notes that “predictors are selected based on past work and the researcher decides in which order to enter the predictors into the model,” and that “known predictors should be entered into the model first in order of their importance in predicting the outcome” (p. 322). In this study, the initial perceived achievement (ipAch) score was entered in the first block, and then all of the dummy variables of instructional type (i.e., the participatory instruction dummy variable (trtP-ipAch) and the direct instruction dummy variable (trtD-ipAch)) were entered in the last block. The dependent variable was the final perceived achievement (fpAch) score. Other dummy variables of major, gender, status, team preference, and inter-professional teams were used. The age variable was not used in the main analysis because it was skewed (skewness = 2.56; kurtosis = 8.20) and the tests for normality were significant: the Kolmogorov-Smirnov (K-S) statistic was .289 (p = .001) and the Shapiro-Wilk (S-W) statistic was .729 (p = .001) (see Appendix J, Table 86).
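A minimal sketch of the blockwise entry described above is given below; it assumes a hypothetical data file and uses the R-squared change between blocks as the test of the added predictor:

# Sketch of hierarchical (blockwise) entry: the covariate ipAch is entered in
# block 1 and the instructional-type dummy in block 2. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsp_scores.csv")  # hypothetical

block1 = smf.ols("fpAch ~ ipAch", data=df).fit()
block2 = smf.ols("fpAch ~ ipAch + trtP", data=df).fit()

r2_change = block2.rsquared - block1.rsquared
print(f"Block 1 R^2 = {block1.rsquared:.3f}")
print(f"Block 2 R^2 = {block2.rsquared:.3f}, R^2 change = {r2_change:.3f}")
print(block2.compare_f_test(block1))  # F test for the R^2 change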

Final perceived achievement score and initial perceived achievement score are interval, satisfying the metric or dichotomous level of measurement requirement for independent variables. Instructional or treatment type, major, team preference, IP team, gender, and status were dummy coded as dichotomous variables, satisfying the metric or dichotomous level of measurement requirement for independent variables.

Linearity test. The correlation between direct instruction and final perceived achievement score was negative and statistically significant, r(88) = -.28, p = .008; participatory instruction and final perceived achievement score was positive and statistically significant, r(88) = .39, p = .001 (see Table 25). Linear relationships existed between these variables.

Normality of the predictor variable. The independent variables, treatment and initial perceived achievement, satisfied the criteria for a normal distribution. The skewness of the distribution (0.95) was between -1.0 and +1.0 and the kurtosis of the distribution (-0.05) was between -1.0 and +1.0.

Outliers in the analysis. If cases have a standardized residual larger than ±3.0, SPSS creates a table of Casewise Diagnostics, in which it lists the cases and values that result in their being outliers. If there are no outliers, SPSS does not print the Casewise Diagnostics table. In this study, there was a table for this problem: cases 2, 52, and 83 were listed. After further investigation through the Mahalanobis distance values, these cases were again flagged; comparing their Mahalanobis distance values with the chi-squared critical value at the .001 level, the values were significant, and the cases were deleted from the original data (N = 93). The analysis was rerun on the remaining cases (N = 90). The results showed that there was no Casewise Diagnostics table, though the Mahalanobis statistic listed some cases that were not significant after the chi-squared tests. Further verification was done using the standardized residuals from the Residuals Statistics table; the standardized residuals ranged from -2.66 to 2.56, all less than ±3.0. The tolerance values for all of the independent variables were larger than 0.10: participatory instruction (.98) and initial perceived achievement (.98). So multicollinearity was not a problem in this regression analysis. The Durbin-Watson statistic for this problem was 2.06, which fell within the acceptable range of 0 to 4. If the Durbin-Watson statistic is approximately 2, the residuals are uncorrelated; a value close to 0 indicates a strong positive correlation, while a value close to 4 indicates a strong negative correlation.
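The same diagnostics (standardized residuals, tolerance, and the Durbin-Watson statistic) could be reproduced outside SPSS with a sketch such as the following; the data file and variable names are hypothetical:

# Sketch of the regression diagnostics discussed above. Names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("hsp_scores.csv")  # hypothetical
model = smf.ols("fpAch ~ ipAch + trtP", data=df).fit()

# Standardized residuals; flag any case beyond ±3.0 (casewise diagnostics).
std_resid = model.get_influence().resid_studentized_internal
print(df[abs(std_resid) > 3.0])

# Tolerance = 1 / VIF for each predictor (values above .10 suggest no problem).
X = sm.add_constant(df[["ipAch", "trtP"]])
for i, name in enumerate(X.columns):
    if name != "const":
        vif = variance_inflation_factor(X.values, i)
        print(name, "tolerance =", round(1 / vif, 3))

# A Durbin-Watson statistic near 2 indicates uncorrelated residuals.
print("Durbin-Watson =", round(durbin_watson(model.resid), 2))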

Pre-surveys were the measures of the learners’ initial self-concept of IOM competency skills before the learners were exposed to either treatment (participatory instruction or direct instruction). The sample came from intact classes that consisted of HSP students from different fields. This suggests that the sample data were heterogeneous and might not satisfy the assumptions for parametric statistics (Warner, 2013). Since the study used independent samples t-tests, univariate ANCOVA, partial correlation, and hierarchical multiple regressions, the data needed to be screened (Warner, 2013; Meyers, Gamst, & Guarino, 2013). According to Meyers et al. (2013), “the accuracy of the data involves researchers carrying out a process of data cleaning”; and “the extent to which the data have met the important assumptions (normality, linearity, homoscedasticity, and independence of errors)” (p. 37) is a complex issue involving data screening. Meyers et al. (2013) note that extreme values can distort the results of a statistical analysis (p. 37).

Data gathered were analyzed using the following methods. Frequency distributions and percentages describe the demographics of the research: gender (female and male); status (undergraduate (ungrad), graduate (grad)); team preference (working in teams (Wktm), working alone (Wkaln)); major (Nursing (BSN), Physical Therapy (PT), Nutrition (NUT), Speech Language Therapy (SLP), Social Work (SW), Others (Medicine, Music Therapy, and Audiology)); IP team (A, B, C, D, E, F, G, H, K) (see Appendix M, Table 112); and instructional type (participatory group (trtP), direct group (trtD)). The scale variables included the initial perceived achievement score (ipAch); final perceived achievement score (fpAch); initial self-concept in patient-centered care score (Rp); initial self-concept in interdisciplinary teamwork score (Rtw); initial self-concept in evidence-based practice score (Re); initial self-concept in quality improvement score (Rq); initial self-concept in informatics score (Rinf); final self-concept in patient-centered care score (Rp1); final self-concept in interdisciplinary teamwork score (Rtw1); final self-concept in evidence-based practice score (Re1); final self-concept in quality improvement score (Rq1); final self-concept in informatics score (Rinf1); perceived achievement scores (pAch); change in overall perceived achievement scores (cpAch); initial perceived achievement scores due to only the participatory instructional type (trtP-ipAch); and initial perceived achievement scores due to only the direct instructional type (trtD-ipAch).

Data were reported as raw data and, when necessary for clarity, were converted to scores, averages, effect sizes, or proportions (percentages). The age variable failed the normality test, and percentage distributions of age might distort reality. Terms used throughout the data analyses include differences, change, gains, association, correlation, variance, covariance, relationships, effectiveness, impact, and influence. The taxonomy of measures includes central tendency, variability (dispersion, range-based measures), and symmetry (skewness, kurtosis).

Comparison of the means for the two groups. Independent samples t-test analyses compared the means of the two groups on various variables of achievement and self-concepts. The means were used because they tend to vary less than other measures of central tendency (e.g., trimmed means and winsorized means) and because they are the points around which the other scores cancel out. The means are also good descriptive measures of the centrality of scores and the points in a distribution around which the variation of the scores is minimized. Using the null hypothesis that two samples come from populations having the same means, the independent samples t-statistic compares the observed difference between sample means to the expected variation within both samples and is specified by the degrees of freedom (Warner, 2013). The effect sizes were computed. The rationale for including the statistical assumptions was that the researcher would be using this study as a teaching manual or tool; however, most doctoral dissertations exclude these assumptions.

Independent samples t-test. Hypotheses 1 and 2 were tested using the independent samples t-test. The independent samples t-test was used for both hypotheses because it compared the change in overall perceived achievement scores of the HSP students who were taught using participatory instruction and those who were taught using direct instruction, and of the HSP students who preferred working in teams and those who preferred working alone. In order to assess the perceived gains due to an instructional type on the perceived achievement of IOM standards in a module project, the paired raw scores from the pre-survey and post-survey were used. The change in perceived scores was computed for each student on each standard. For Hypothesis 1, a table was prepared on the change in overall perceived achievement scores for each instructional type, the gains, and the effect sizes for the change in overall perceived achievement and the change in perceived self-concepts on each standard. Hypothesis 2 had four sub-hypotheses and a table for each sub-hypothesis, making a total of four tables. Each table has the change in overall perceived achievement scores for each team preference, the gains, and the effect sizes for the change in overall perceived achievement score and the change in perceived self-concept score on each standard.
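For illustration, the independent samples t-test and an accompanying effect size (Cohen's d from the pooled standard deviation) could be computed with a short sketch such as the following; the data file, group labels, and column names are hypothetical:

# Sketch of the independent-samples t-test and effect size for Hypotheses 1 and 2.
# Group labels and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("hsp_scores.csv")  # hypothetical
part = df.loc[df["trt"] == "participatory", "cpAch"]
direct = df.loc[df["trt"] == "direct", "cpAch"]

t, p = stats.ttest_ind(part, direct, equal_var=True)

# Cohen's d from the pooled standard deviation.
n1, n2 = len(part), len(direct)
pooled_sd = np.sqrt(((n1 - 1) * part.var(ddof=1) + (n2 - 1) * direct.var(ddof=1))
                    / (n1 + n2 - 2))
d = (part.mean() - direct.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")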

ANCOVA. Hypotheses 3, 4, 7, and 8 were tested using analysis of covariance (ANCOVA) with the initial perceived achievement score as the covariate. Analysis of covariance was used because the study involved existing intact classes of groups of students who might have differed in their initial perceived achievement scores on the IOM standards. Analysis of covariance adjusts the scores statistically to account for the possibility of preexisting conditions. The covariate was the initial perceived achievement score (as an independent variable), the categorical variables were instructional type, IP teams, team preference, and majors, and the dependent variable was the final perceived achievement score.

For each hypothesis, five tables were prepared. The first table contained the means and standard deviations of the unadjusted and adjusted final perceived achievement scores, and the means and standard deviations of the initial perceived achievement scores; the second table was an ANOVA summary table showing the significance of the difference between the independent variable and the final perceived achievement mean scores; the third table was an ANCOVA Type III Sum of Squares summary table showing the non-significance of the interaction between the independent variable and the covariate; the fourth table was an ANCOVA Type I Sum of Squares summary table showing the significance of the difference between adjusted means; and the fifth table was a parameter estimates summary table showing the slope coefficients, providing information about contrasts between the adjusted final perceived achievement means of the dummy variables of the independent variable.

Partial correlation and hierarchical multiple regressions. Hypotheses 5, 6, and 9 were tested using partial correlation and hierarchical multiple regressions with the initial perceived achievement score as the covariate (entered first). Partial correlation and hierarchical multiple regressions were used because the study involved existing intact groups of students who might have differed in their initial perceived achievement scores and their self-concept scores on the IOM standards. The interaction between the covariate and the independent variable was not significant. The predictor variables were the initial perceived achievement score and the dummy variable scores of instructional type, IP teams, team preference, and majors; the dependent variable was the final perceived achievement score. In the analysis, the initial perceived achievement score was entered first, followed by the dummy variable scores of either majors, IP teams, or team preference, and then, last, the dummy variable scores of instructional type. For each hypothesis, three tables were prepared. The first table contained the correlation coefficients for final perceived achievement versus initial perceived achievement, and for final perceived achievement versus the dummy variable scores of the predictor involved, without controlling for initial perceived achievement scores. The second table presented the partial correlation coefficients for final perceived achievement versus the dummy variable scores of the predictor involved, controlling for the initial perceived achievement score. Finally, the third table presented the hierarchical multiple regression model summary coefficients.
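As a sketch of the partial correlation step, the covariate can be regressed out of both variables and the residuals correlated; the data file and variable names below are hypothetical:

# Sketch of a partial correlation between final perceived achievement and an
# instructional-type dummy, controlling for the initial score.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("hsp_scores.csv")  # hypothetical

# Residualize both variables on the covariate, then correlate the residuals.
res_y = smf.ols("fpAch ~ ipAch", data=df).fit().resid
res_x = smf.ols("trtP ~ ipAch", data=df).fit().resid
r, p = stats.pearsonr(res_x, res_y)
print(f"partial r = {r:.2f}, p = {p:.3f}")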

Data Analysis Procedure: Phase 2-Qualitative Data

Qualitative journal reflection. Journal analyses employed themeing the data and elaborative coding methods. Themeing the data was used as the first-cycle coding method. According to Saldaña (2009), ‘Themeing the Data’ provides “a brief profile of that process” (p. 139). A theme is a phrase or sentence that identifies what a unit of data is about and/or what it means. In this study, the categories or units of analysis were the five standards, individuals, and groups. Assignments and the class plans were used to categorize the textual data into five categories (IOM standards) in the first cycle. Then elaborative coding was used as the second-cycle coding method. According to Saldaña (2009), the elaborative coding method is “‘top-down’ coding” (p. 168) because the IOM standards were used to categorize the textual data in the first cycle. Elaborative coding builds on the quantitative results from Phase 1. Saldaña (2009) notes, “This method can support, strengthen, modify, or disconfirm the findings from previous research” (p. 168). In this study, at the second-cycle coding, keywords (think, feel, like, interested, aware, helpful, useful, important, surprise, excited, explain, because, and others) were used to identify sentences from the categorized textual data. The Microsoft Excel “data-sort” command was used to sort these keywords and themes. Keywords of the assignments were color coded in the students’ weekly written journals. A “keyword” column was created for the keywords. Using the Excel “data-sort” command, the keywords were arranged alphabetically. Another column, the “skill” column, was created; in this column, each keyword in the cells under the “keyword” column had its category of IOM standard typed. Using the Excel “data-sort” command on the “skill” column, the IOM standards were arranged in alphabetical order. The content of the cell for each standard was read, and statements or sentences of underlying themes were teased out to explain the change due to participatory or direct instruction in the students’ self-concepts on the standards of a group module project.
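The keyword sorting done in Excel could equally be expressed as a short script; the sketch below, with an illustrative keyword-to-standard mapping and two invented journal entries, shows the general idea rather than the actual coding scheme:

# Sketch of the keyword-to-standard mapping done with Excel's data-sort command.
# The keyword list and journal entries are illustrative only.
import pandas as pd

keyword_to_standard = {          # hypothetical mapping
    "collaborate": "interdisciplinary teamwork",
    "evidence": "evidence-based practice",
    "app": "informatics",
    "patient": "patient-centered care",
    "improve": "quality improvement",
}

journals = pd.DataFrame({
    "student": ["A", "B"],
    "entry": ["I collaborate with my team on the patient case.",
              "The app helped us improve our evidence search."],
})

rows = []
for _, row in journals.iterrows():
    for keyword, standard in keyword_to_standard.items():
        if keyword in row["entry"].lower():
            rows.append({"student": row["student"], "keyword": keyword,
                         "standard": standard, "sentence": row["entry"]})

coded = pd.DataFrame(rows).sort_values(["standard", "keyword"])
print(coded)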

Credibility. The researcher increased credibility using triangulation (Patton, 2002). The “concept behind triangulation is that since all methods have limitations, combining methods can help minimize the overall limitations of the study” (Patton, 2002, p. 252). The purpose of triangulation is to test for consistency, with the idea being that inconsistencies are actually good because real life is often inconsistent. The researcher sought to understand the inconsistencies, since triangulation allows seeing the same thing from different perspectives (Patton, 2002).

Respondent validation. Attention to respondent validation is very important. For example, Torrance (2012) notes, “attention to respondent’s validation is a significant issue for methodological debate” (p. 111). Torrance asserts that such attention “should be an important aspect of the development of democratic participation” (p. 111).

Triangulation. Torrance (2012) notes, “there is no single method that affords a comprehensive account of the phenomenon under investigation” (p. 113). Torrance suggests that two or more methods should be employed to address the problem (p. 113). Torrance notes that different perspectives can be generated for understanding the depth and breadth of the problem. He suggests “that different data sources may generate discrepant accounts, and should be interpreted and ask for further investigation” (Torrance, 2012, p. 113).

Aspects of triangulation. There are several aspects of data triangulation.

According to Torrance (2012), there are four essential aspects of triangulation. The first two aspects include triangulation of data (different sources, accessed over time); and triangulation of investigators (use of teams if possible) (Torrance, 2012). The last two include triangulation of method (observation, interview, survey); and triangulation of theory (the bringing to bear of different theoretical perspectives on the data in order to generate different interpretive accounts) (Torrance, 2012).

Pilot Study

A pilot study took place before the main study. The results of the pilot study were used to determine feasibility and to detect possible errors in the process of data retrieval, management, and analysis. It described when, where, and how the study was carried out. After the pilot study, the researcher decided whether the proposed procedures needed to be revised (Cohen, Manion, & Morrison, 2011).

IRB Procedures. To comply with Ohio University’s ethical guidelines for research, any research study that involves human subjects is required to undergo review by the Ohio University Institutional Review Board (IRB). The IRB ensures that the purpose and procedures of a research study are in compliance with the prescribed guidelines of ethical research and cause no harm to research participants. The IRB also protects the privacy of all research participants. The researcher obtained IRB approval (see Appendix F) in May 2015, prior to implementing the study, in order to start the pilot study. All HSP students in the study were older than 18 years; however, because the researcher was using existing data, signed consent forms were not required (see Appendix E).

The researcher obtained permission to use the MedTAPP database that the program director had created on Google Docs. The researcher gained access to the database and became familiar with the data in it. The researcher examined the course plan, the course syllabus, class activities, assignments, students’ weekly journal reflections, students’ final projects, and students’ survey data.

Subject-matter expert. The subject-matter experts included an experienced instructor who had practiced inter-professional and team-based learning for a long time and a peer mentor who had been enrolled in the HSP 5510 course. The instructor served as a coach or mentor during the classroom instructional delivery.

Participatory classroom environment. The participatory classroom environment encouraged teamwork and the sharing of experiences. Students were autonomous in their learning; they were co-researchers. The instructor was a coach, a researcher, and a stakeholder. This environment provided opportunities for the students to discuss, collaborate, negotiate roles, advocate, solve problems, and make team decisions. The classroom environment was very interactive.

The content. The content of HSP 4510/5510 (see Appendices B and C) consisted of five modules, namely, the patient-centered care module, the interdisciplinary team module, the evidence-based practice module, the quality improvement module, and the informatics module, plus a final module project. The topics covered included dog bite, diabetes, patient safety, wearable technology, what I actually do, word on the street, the Aurasma app, the Explain Everything app, and getting to know. Other topics included medical controversy, patient provider, professional description, TIA, antibiotic conflict, Bloglovin, alternative communication, and AAC time to talk. Further topics were Twitter, LinkedIn, Facebook, collaboration apps, digital footprint, expert opinion, HIPAA healthcare app, cranial nerves, burn victim, the Google Drive app, the Google Hangout app, the Penultimate app, the Pinterest app, and the Voice Thread app (DiGiovanni & McCarthy, in press).

Learning objectives of the HSP 5510 course. The learning objectives of the course were to:

Objective 1. Design and create interactive online learning activities on providing patient-centered care. This standard has five (5) sub-objectives, including to a) “share power and responsibility with patients and caregivers”; b) “communicate with patients in a shared and fully open manner”; c) “take into account patients’ individuality, emotional needs, values, and life issues”; d) “implement strategies for reaching those who do not present for care on their own (care strategies that support the broader community)”; and e) “enhance prevention and health promotion” (Greiner & Knebel, 2003, pp. 52-53).

Objective 2. Design and create interactive online learning activities on working in interdisciplinary teams. This standard has eight (8) sub-objectives, including to a)

“learn about other team members’ expertise, background, knowledge, and values”; b)

“learn individual roles and processes required to work collaboratively”; c) “demonstrate basic group skills (communication, negotiation, delegation, time management, and assessment of group dynamics)”; d) “ensure that accurate and timely information reaches those who need it at the appropriate time”; e) “customize care and manage smooth transitions across settings and over time, even when the team members are in entirely different physical locations”; f) “coordinate and integrate care processes to ensure excellence, continuity, and reliability of the care provided”; g) “resolve conflicts with other members of the team”; and h) “communicate with other members of the team in a shared language, even when the members are in entirely different physical locations”

(Greiner & Knebel, 2003, pp. 54-56).

Objective 3. Design and create interactive online learning activities on employing evidence-based practice. This standard has four (4) sub-objectives, including to a) “know where and how to find the best possible sources of evidence”; b) “formulate clear clinical questions”; c) “search for the relevant answers to those questions from the best possible sources of evidence (that evaluate or appraise the evidence for its validity and usefulness with respect to a particular patient or population)”; and d) “determine when and how to integrate these new findings into practice” (Greiner & Knebel, 2003, pp. 56-58).

Objective 4. Design and create interactive online learning activities on applying quality improvement. This standard has five (5) sub-objectives, including to a)

“continually understand and measure quality of care in terms of structure, or the inputs into the system, such as patients, staff, and environments; process, or the interactions between clinicians and patients; and outcomes, or evidence about changes in patients’ health status in relation to patient and community needs”; b) “assess current practices and compare them with relevant better practices elsewhere as a means of identifying opportunities for improvement”; c) “design and test interventions to change the process of care, with the objective of improving quality”; d) “identify errors and hazards in care; understand and implement basic safety design principles, such as standardization and simplification and human factors training”; and e) “both act as an effective member of an interdisciplinary team and improve the quality of one’s own performance through self- assessment and personal change” (Greiner & Knebel, 2003, p. 59).

Objective 5. Design and create interactive online learning activities on utilizing informatics. This standard has five (5) sub-objectives, including to a) “employ word processing, presentation, and data analysis software”; b) “search, retrieve, manage, and make decisions using electronic data from internal information databases and external online databases and the Internet”; c) “communicate using e-mail, instant messaging, listservs, and file transfers”; d) “understand security protections such as access control, data security, and data encryption, and directly address ethical and legal issues related to the use of information technology in practice”; and e) “enhance education and access to reliable health information for patients” (Greiner & Knebel, 2003, pp. 60-63).

Sample module on the IOM core competencies (IOM Standards). The instructor provided students with a sample past module iBook project to analyze and asked them to create a new module based on the five IOM standards. These activities included analyzing the old sample module, identifying oversights and errors, and building a new one. There were opportunities for students to work individually, in professional teams, and in inter-professional teams. The tasks comprised examining a sample module project that employed a) a patient-centered component, b) an interdisciplinary team component, c) an evidence-based component, d) a quality improvement component, and e) an informatics component. Each component has activities, knowledge, and skills. Each component lasted for a week. These five components were chosen to fit the instructional objectives of the class plan. This decision was based on the IOM standards and results from the pilot study.

A sample module project was used to explore the effectiveness of the participatory instruction on students’ perceived self-concepts, students’ perceived self-efficacy, and students’ subsequent (final) perceived achievement score. The module project of participatory instruction could be a fundamental activity that utilized the IOM standards effectively to impact students’ perceived self-efficacy. In the module project activities, students performed activities individually, in their professional teams, and then in their inter-professional teams.


Figure 5. Example of students’ work showing various components of IOM standards of a group module project

Sample module iBook project. The sample module project had five practice components. These components were aligned with IOM core standards.

Component 1. Component 1 provided a practice opportunity following the demonstration of a module lesson using participatory interdisciplinary teamwork method in a group module project.

Component 2. Component 2 provided a practice opportunity following the demonstration of a module lesson using participatory evidence-based practice method in a group module project.

Component 3. Component 3 provided a practice opportunity following the demonstration of a module lesson using participatory quality improvement method in a group module project.

Component 4. Component 4 provided a practice opportunity following the demonstration of a module lesson using participatory informatics method in a group module project.

Component 5. Component 5 provided a practice opportunity following the demonstration of a module lesson using participatory patient-centered care method in a group module project.

Class activities. Students engaged in designing and creating an online interactive instructional module.

Participatory learning activities. The participatory activities comprised instructional objectives, the mode of practice, and the overall iBook module.

Instructional objective. Given a sample final module project, students could create an online interactive instructional module within five weeks and meet all criteria specified on the checklist.

Mode of practice. On the sample module project, the sequence of events for each component was as follows: a) recognizing individually (identify, document the key module component skills), b) editing in professional teams (share, discuss, refine, negotiate, critique, review, and document the key module component skills), and c) producing in inter-professional teams (imitate, collaborate, build, design, create a group module iBook on IOM standards).

Activities on patient-centered care. Students individually studied, identified, and documented a sample module component for patient-centered care. Students in their professional teams discussed, revised, and documented the processes involved in the sample module component for patient-centered care. Students in their inter-professional teams shared their revised documents and created a new module component for patient-centered care.

Activities on interdisciplinary team. Students individually studied, identified, and documented a sample module component for the interdisciplinary team only. Students in their professional teams discussed, revised, and documented the processes involved in the sample module component on interdisciplinary team. Students in their inter-professional teams shared their revised documents and created a new module component on interdisciplinary team.

Activities on evidence-based practice. Students individually studied, identified, and documented the processes involved in the sample module component on evidence-based practice. Students in their professional teams discussed, revised, and documented the processes involved in the sample module component on evidence-based practice. Students in their inter-professional teams shared their revised documents and created a new module component on evidence-based practice.

Activities on quality improvement. Students individually studied, identified, and documented the processes involved in the sample module component on quality improvement. Students in their professional teams discussed, revised, and documented the processes involved in the sample module component on quality improvement. Students in their inter-professional teams shared their revised documents and created a new module component on quality improvement.

Activities on informatics. Students individually studied, identified, and documented the processes involved in the sample module component on informatics. Students in their professional teams discussed, revised, and documented the processes involved in the sample module component on informatics. Students in their inter-professional teams shared their revised documents and created a new module component on informatics.

Overall activities on an iBook module. Students individually studied sample module project components that used an iBook format. Students in their professional teams discussed, revised, and documented the processes involved in the sample module project components that used an iBook format. Students in their inter-professional teams shared their revised documents and created a new module iBook.

Patient-centered care. Using the final sample module components, in participatory practice, students’ self-concept on the component will increase as the students study, identify, revise, and document the processes on the sample module. For example, if students are given opportunities to practice individually, then in their professional teams, and then in their inter-professional teams, their self-concept on the standards will improve, their subsequent perceived self-efficacy will be higher, and their final perceived achievement on the IOM standards of the module project will increase. Thus, instructors should create opportunities for students to practice on their own, then in their professional teams, and then in their inter-professional teams. As students are given the opportunity to practice using the recognize-edit-produce model of Gropper (1983b) individually, in their professional teams, and in their inter-professional teams, their final perceived self-concepts on the IOM standards will be higher.

Interdisciplinary team. As students discuss past students’ module projects in teams, they will improve their subsequent self-concept, increase their subsequent perceived self-efficacy, and raise their final perceived achievement in module creation.

Evidence-based practice. As students take a more active role in sharing experiences with other members of the team, they will improve their initial perceived self-concepts on the content.

Quality improvement. As students interview experts in their own fields, they will improve their self-concepts and learn strategies for communication.

Informatics. As students discuss and share their experiences, they improve their self-concepts and their self-efficacy on the IOM standards and component skills, and create a useful module iBook.

Data retrieved. The researcher retrieved both qualitative and quantitative data from the Google Docs MedTAPP database. The data analysis had two phases. These phases were quantitative data analysis phase 1 and qualitative data analysis phase 2.

Students wrote weekly journals on their reflections and experiences on their participatory learning.

Scoring level of perceived achievement. Each student’s self-rated scores on the five items (IOMSKA survey) were added to give an overall weighted perceived achievement score. The expected maximum overall perceived achievement score was 35, and the minimum overall perceived achievement score was 5. Similarly, the expected minimum self-concept score on an item was 1 and the maximum self-concept score on an item was 7. Using these weighted sums helped in calculating the change in a student’s perceived achievement score from pre- to post-survey. The sum of the pre-survey measures on all standards for a student reflected the student’s prior (initial) perceived achievement score. The sum of the post-survey measures on all standards for a student reflected the student’s subsequent (final) perceived achievement score due to participatory instruction over time.
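The scoring rule can be summarized in a short sketch: the five item ratings (1-7) are summed to give scores between 5 and 35, and the change score is the post-survey sum minus the pre-survey sum. The item abbreviations follow those listed above, while the survey file itself is hypothetical:

# Sketch of the scoring rule: sum the five IOM item ratings (1-7) for a range
# of 5-35, then take post minus pre as the change score. File name is hypothetical.
import pandas as pd

items = ["Rp", "Rtw", "Re", "Rq", "Rinf"]             # pre-survey self-concept items
items_post = ["Rp1", "Rtw1", "Re1", "Rq1", "Rinf1"]   # post-survey items

df = pd.read_csv("iomska_survey.csv")  # hypothetical survey export

df["ipAch"] = df[items].sum(axis=1)       # initial perceived achievement (5-35)
df["fpAch"] = df[items_post].sum(axis=1)  # final perceived achievement (5-35)
df["cpAch"] = df["fpAch"] - df["ipAch"]   # change in perceived achievement
print(df[["ipAch", "fpAch", "cpAch"]].describe())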

Phase 1 pilot study - quantitative data analysis. The quantitative data analysis included 119 students’ responses on the pilot pre-survey. Descriptive statistics (mean scores, standard deviations, extreme values) were computed using SPSS, box-plots were used to identify the cases whose initial perceived achievement scores were outliers, and bar charts were drawn. Inferential statistics (independent samples t-tests) were computed to provide statistical evidence for gain on the change in perceived self-concept scores due to the participatory intervention.

Criteria for inclusion in phase 1. The criteria were that a student should be enrolled in the HSP 5510 course; that the student should participate in both the pre- and post-surveys at the beginning and at the end of the semester, respectively; and that the student should write weekly journals throughout the semester. With these criteria, 119 students were selected for the quantitative data analysis.

Phase 2 pilot study - qualitative data analyses. For the comments analyses, students’ comments on their knowledge skill ratings were organized with respect to the five IOM core competencies. For the qualitative data analysis, themeing the data and elaborative coding were used to explore the emerging themes that provided insight into and understanding of the IOM standards. For the journal analyses, students’ weekly journals were organized using keywords with respect to the topics of their class plan (see Appendix L, Table 98). These keywords were mapped to the five IOM standards. The keywords were used to explore the emerging themes that were consistent in the students’ reflections and that provided insight into and understanding of the IOM standards.

Case selection. From the quantitative data analysis results, students whose scores were extreme values (lower and higher scores) on each IOM competence skill, and who satisfied this criterion on at least 3 out of 5 IOM core competence skills, were selected for the qualitative data analysis. A student should have a rating score of Low (1 or 2) or High (6 or 7) on each of the IOM standards, and rating scores of low or high on at least 3 out of 5 (60%) of the IOM standards. With these criteria, nine (9) students were selected for the qualitative data analysis; specifically, five (5) came from the participatory group, and four (4) were from the direct group.

Pilot study results. The results from the pilot study (see Appendix G) informed this study in several ways, including the formulation of appropriate research questions and hypotheses; clarification of reliability issues; selection of an appropriate research design; selection criteria for cases in the qualitative phase; and selection of appropriate statistical tools for the quantitative data analysis. First, some of the research questions were revised based on the nature of the research design (quasi-experimental design) and the sampling technique (convenience sample) adopted in the data generation (see Appendix N). Second, it was identified that the instrument was a self-reported measure rather than a knowledge measure; thus, the original ‘knowledge achievement’ was changed to ‘perceived achievement.’ A low reliability coefficient was detected for initial self-concept on patient-centered care for the pilot study and the main study at the pre-survey, raising issues of Cronbach’s alpha and test-retest reliability. Since the researcher used existing data, there could be no revision of the items that produced the low reliability coefficient; thus, the researcher decided to support the quantitative data with qualitative data using students’ journal reflections and their comments on the survey. Third, the criterion of 3 out of 5 IOM standards for selecting cases for the qualitative data analyses was dropped during the main study because no case was found for the direct instruction group. Through the data screening process, students whose z-scores on the initial perceived achievement scores fell above +2 or below -2 standard deviations were selected for the qualitative data analysis (the second phase) (see Table 1). The rationale for doing this was not to lose these important participants, since they might otherwise be eliminated through the quantitative data analysis. Finally, the ideas of using regression analysis, ANOVA, and paired samples t-tests were dropped. Hierarchical multiple regressions and partial correlations were used in the main study because there was no random assignment of the sample; rather, a convenience sampling (intact classes) technique was used.
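The z-score screening criterion adopted for case selection could be expressed as follows; the data file and column names are hypothetical:

# Sketch of the z-score screening used to select cases for the qualitative phase:
# students whose initial perceived achievement z-scores fall above +2 or below -2.
import pandas as pd

df = pd.read_csv("hsp_scores.csv")  # hypothetical
z = (df["ipAch"] - df["ipAch"].mean()) / df["ipAch"].std(ddof=1)
selected = df[(z > 2) | (z < -2)]
print(selected[["student_id", "ipAch"]])  # student_id is a hypothetical column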

Chapter 4: Results

This chapter provides an analysis of the data collected from 90 Health Sciences and Professions (HSP) undergraduate and graduate students of a Midwestern university. The chapter presents descriptive statistics, tests the hypotheses, and provides the findings of this study.

Research Questions and Hypotheses

Research question 1. Do HSP students who are taught using a participatory instruction have greater gain on the change in overall perceived achievement scores than the HSP students who are taught using the direct instruction?

Hypothesis 1. Null hypothesis (Ho): There is no statistically significant difference between the change in overall perceived achievement scores for the HSP students taught using participatory instruction and the change in overall perceived achievement scores for the HSP students taught without using participatory instruction (H0: μ_gpAch-part − μ_gpAch-direct = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for HSP students taught using participatory instruction will be statistically significantly greater than that for HSP students taught without using participatory instruction (Ha: μ_gpAch-part > μ_gpAch-direct).

Research question 2. How do HSP students feel about team preference on a group module project with regard to participatory and direct instructional types?

Hypothesis 2a. Null hypothesis (Ho): There is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction (H01: μ_cpAchWktm-part − μ_cpAchWktm-direct = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction will be statistically significantly greater than that for the HSP students who preferred working in teams taught without using participatory instruction (Ha1: μ_cpAchWktm-part > μ_cpAchWktm-direct).

Hypothesis 2b. Null hypothesis (Ho): There is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction (H02: μ_cpAchWkal-part − μ_cpAchWkal-direct = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction will be statistically significantly greater than that for the HSP students who preferred working alone taught without using participatory instruction (Ha2: μ_cpAchWkal-part > μ_cpAchWkal-direct).

Hypothesis 2c. Null hypothesis (Ho): There is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction (H0: μ_cpAchWktm-part − μ_cpAchWkal-part = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction will be statistically significantly greater than that for the HSP students who preferred working alone taught using participatory instruction (Ha: μ_cpAchWktm-part > μ_cpAchWkal-part).

Hypothesis 2d. Null hypothesis (Ho): There is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction (H0: μ_cpAchWktm-direct − μ_cpAchWkal-direct = 0).

Alternative hypothesis (Ha): The gain on the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction will be statistically significantly greater than that for the HSP students who preferred working alone taught without using participatory instruction (Ha: μ_cpAchWktm-direct > μ_cpAchWkal-direct).

Research question 3. How does the participatory instruction of IOM standards on a group module project affect students’ final perceived achievement scores in their majors, controlling for their initial perceived achievement scores?

Hypothesis 3. Null hypothesis (Ho): There is no statistically significant difference between the unadjusted final perceived achievement scores for the HSP students from various majors taught using participatory instruction and the adjusted final perceived achievement scores for the HSP students from various majors taught using participatory instruction, controlling for their initial perceived achievement scores (H0: μ_unadjfpAch-part − μ_adjfpAch-part = 0).

Alternative hypothesis (Ha): The unadjusted final perceived achievement scores for the HSP students from various majors taught using participatory instruction will be statistically significantly greater than the adjusted final perceived achievement scores for the HSP students from various majors taught using participatory instruction, controlling for their initial perceived achievement scores (Ha: μ_unadjfpAch-part > μ_adjfpAch-part).

Research question 4. How does the direct instruction of IOM standards on a group module project affect students’ final perceived achievement scores in their majors, controlling for their initial perceived achievement scores?

Hypothesis 4. Null hypothesis (Ho): There is no statistically significant difference between the unadjusted final perceived achievement scores for the HSP students from various majors taught using direct instruction and the adjusted final perceived achievement scores for the HSP students from various majors taught using direct instruction, controlling for their initial perceived achievement scores (H0: μ_unadjfpAch-direct − μ_adjfpAch-direct = 0).

Alternative hypothesis (Ha): The unadjusted final perceived achievement scores for the HSP students from various majors taught using direct instruction will be statistically significantly greater than the adjusted final perceived achievement scores for the HSP students from various majors taught using direct instruction, controlling for their initial perceived achievement scores (Ha: μ_unadjfpAch-direct > μ_adjfpAch-direct).

Research question 5. What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?

Hypothesis 5. Null hypothesis (Ho): There is no positive and statistically significant impact of the HSP students’ instructional type on the final perceived achievement scores in their majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Alternative hypothesis (Ha): Participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of HSP students from various majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Research question 6. What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?

Hypothesis 6. Null hypothesis (Ho): There is no positive and statistically significant impact of the HSP students’ instructional type on the final perceived achievement scores in their team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Alternative hypothesis (Ha): Participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of HSP students from the various team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Research question 7. How does a participatory instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their

IP teams, controlling for their initial perceived achievement scores?

Hypothesis 7. Null hypothesis (Ho): There is no statistically significant difference between the unadjusted final perceived achievement scores for the HSP students from various IP teams taught using participatory instruction and the adjusted final perceived achievement scores for the HSP students from various IP teams taught using participatory instruction, controlling for their initial perceived achievement scores (H0: μ_unadjfpAch-part − μ_adjfpAch-part = 0).

Alternative hypothesis (Ha): The unadjusted final perceived achievement scores for the HSP students from various IP teams taught using participatory instruction will be statistically significantly greater than the adjusted final perceived achievement scores for the HSP students from various IP teams taught using participatory instruction, controlling for their initial perceived achievement scores (Ha: μ_unadjfpAch-part > μ_adjfpAch-part).

Research question 8. How does the direct instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their IP teams, controlling for their initial perceived achievement scores?

Hypothesis 8. Null hypothesis (Ho): There is no statistically significant difference between the unadjusted final perceived achievement scores for the HSP students from various IP teams taught using direct instruction and the adjusted final perceived achievement scores for the HSP students from various IP teams taught using direct instruction, controlling for their initial perceived achievement scores (H0: μ_unadjfpAch-direct − μ_adjfpAch-direct = 0).

Alternative hypothesis (Ha): The unadjusted final perceived achievement scores for the HSP students from various IP teams taught using direct instruction will be statistically significantly greater than the adjusted final perceived achievement scores for the HSP students from various IP teams taught using direct instruction, controlling for their initial perceived achievement scores (Ha: μ_unadjfpAch-direct > μ_adjfpAch-direct).

Research question 9. What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?

Hypothesis 9. Null hypothesis (Ho): There is no positive and statistically significant impact of the HSP students’ instructional type on the final perceived achievement scores in their IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Alternative hypothesis (Ha): Participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of HSP students from the various IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Research question 10. How do the HSP students’ journal reflections help explain their self-concept on standards of a group module project?

Preliminary Data Analyses

Reporting the results of data screening. Before the main analyses, all variables were screened for missing values, outliers, and statistical assumption violations using the IBM SPSS Frequencies, Explore, Plot, Missing Value Analysis, and Regression procedures. The 90 HSP students were screened for missing values on three continuous variables (change in overall perceived achievement and self-concept scores, initial perceived achievement and self-concept scores, and final perceived achievement and self-concept scores); no missing values were found. No univariate outliers (beyond ±3 SD) were detected for the change or initial perceived achievement and self-concept variables. One univariate outlier was detected for the final perceived achievement variable; however, neither the Kolmogorov-Smirnov nor the Shapiro-Wilk test of normality was significant at the .001 level, indicating that the final perceived achievement variable was approximately normal. Pairwise linearity was deemed satisfactory. Multivariate outliers were screened by computing the Mahalanobis distance for each case on the three continuous variables; no case was identified as a potential outlier.
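
For readers who wish to replicate this screening outside SPSS, the following minimal Python sketch illustrates the same checks (±3 SD univariate screening, Kolmogorov-Smirnov and Shapiro-Wilk normality tests, and Mahalanobis distances for multivariate outliers). The data frame and variable names (ipAch, fpAch, cpAch) are illustrative placeholders, not the study data; the analyses reported here were conducted in IBM SPSS.

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Placeholder data frame standing in for the three continuous variables (assumed names).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "ipAch": rng.normal(19.6, 3.8, 90),
        "fpAch": rng.normal(26.1, 3.5, 90),
        "cpAch": rng.normal(6.5, 4.3, 90),
    })

    # Univariate outliers: standardized scores beyond +/- 3 SD.
    z = (df - df.mean()) / df.std(ddof=1)
    print((z.abs() > 3).sum())

    # Normality checks: Shapiro-Wilk and Kolmogorov-Smirnov against a fitted normal.
    for col in df.columns:
        _, p_sw = stats.shapiro(df[col])
        _, p_ks = stats.kstest(df[col], "norm", args=(df[col].mean(), df[col].std(ddof=1)))
        print(f"{col}: Shapiro-Wilk p = {p_sw:.3f}, K-S p = {p_ks:.3f}")

    # Multivariate outliers: Mahalanobis distance of each case on the three variables,
    # compared with the chi-square critical value at alpha = .001 (df = 3).
    centered = (df - df.mean()).to_numpy()
    inv_cov = np.linalg.inv(np.cov(centered, rowvar=False))
    md2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    print("Potential multivariate outliers:", int((md2 > stats.chi2.ppf(0.999, df=3)).sum()))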

Testing for assumptions of ANOVA and ANCOVA. A one-way ANOVA was conducted to explore the impact of participatory instruction on the final perceived achievement scores of HSP students in their IP teams, as measured by the IOM Self-Reported Knowledge Achievement scale. For the assumption of homogeneity of variances (see Appendix M, Table 111), the Levene statistic for initial perceived achievement scores in the participatory group was not significant, F(8, 31) = 0.92, p = .51. For final perceived achievement scores, however, the Levene statistic for the participatory group was significant, F(8, 31) = 2.64, p = .03, suggesting unequal variances across the levels of IP teams. This may be due to heterogeneity of the sample, a response shift bias, and the sample size of the participatory group (n = 40). Therefore, it may be reasonable to use a very small α level, such as α = .01 instead of .05, for significance tests of violations of the homogeneity of variance assumption in studies with large sample sizes (Warner, 2013). For the direct instructional group, the Levene statistic for initial perceived achievement scores was not significant, F(8, 41) = 1.41, p = .22, indicating equal variances among the IP teams. Similarly, the Levene statistic for final perceived achievement scores in the direct instructional group was not significant, F(8, 41) = 0.89, p = .53, indicating that the group variances among the IP teams were equal.

The robust tests of equality of means were also examined. For initial perceived achievement scores, the Welch statistic for the participatory group was not significant, F(8, 11.18) = 0.96, p = .51, implying that the initial perceived achievement mean scores were equal across IP teams. For final perceived achievement scores, the Welch statistic for the participatory group was F(8, 11.44) = 0.74, p = .66, suggesting that the final perceived achievement mean scores did not differ across IP teams. However, for initial perceived achievement scores, the Welch statistic for the direct instructional group was significant, F(8, 16.29) = 2.84, p = .04, indicating that students in the direct group differed significantly across IP teams in their average initial perceived achievement scores. For final perceived achievement scores, the Welch statistic for the direct instructional group was not significant, F(8, 16.22) = 2.21, p = .08, suggesting that the final perceived achievement mean scores did not differ significantly.
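
As a reproducibility aid, the sketch below shows how the same assumption checks (Levene's test and Welch's robust test of equality of means) can be run in Python with scipy and statsmodels. The team sizes and simulated scores are placeholders; the results reported above were obtained in SPSS.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.oneway import anova_oneway

    # Placeholder final perceived achievement scores for the nine IP teams
    # (participatory-group team sizes taken from Table 10).
    rng = np.random.default_rng(1)
    teams = [rng.normal(27, 3, size=n) for n in (6, 6, 5, 4, 3, 4, 4, 5, 3)]

    # Levene's test for homogeneity of variances across the nine IP teams.
    w, p_levene = stats.levene(*teams, center="mean")
    print(f"Levene W = {w:.2f}, p = {p_levene:.3f}")

    # Welch's robust test of equality of means (does not assume equal variances).
    welch = anova_oneway(teams, use_var="unequal", welch_correction=True)
    print(f"Welch F = {welch.statistic:.2f}, p = {welch.pvalue:.3f}")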

Post hoc comparisons using the Tukey HSD test indicated that the mean scores for the IP teams in the participatory group were not significantly different from each other (p = .49), suggesting that the teams did not differ significantly under participatory instruction. The final perceived achievement mean was highest for IP team H (M = 30.00), IP team F (M = 29.25), and IP team G (M = 29.00), and lowest for IP team B (M = 25.17). Likewise, post hoc comparisons using the Tukey HSD test indicated that the mean scores for the IP teams in the direct instruction group were not significantly different from each other (p = .30).
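
The Tukey HSD comparisons could be reproduced along the following lines; this is an illustrative Python sketch with placeholder scores, not the SPSS output reported above.

    import numpy as np
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Placeholder scores and labels mirroring the nine IP teams in the participatory group.
    rng = np.random.default_rng(2)
    sizes = {"A": 6, "B": 6, "C": 5, "D": 4, "E": 3, "F": 4, "G": 4, "H": 5, "K": 3}
    scores = np.concatenate([rng.normal(27, 3, n) for n in sizes.values()])
    labels = np.concatenate([[team] * n for team, n in sizes.items()])

    # All pairwise comparisons of IP team means at alpha = .05.
    print(pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05).summary())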

Scores on initial perceived achievement (ipAch) and final perceived achievement (fpAch) were reasonably normally distributed with no extreme outliers. The scatter plot for ipAch and fpAch indicated a linear relation with no bivariate outliers. Within the participatory group, scores on ipAch did not differ significantly across IP teams, F(8, 31) = 1.20, p = .33, η² = .24; however, students in IP team B rated the IOM standards slightly lower, and students in IP team H rated them slightly higher, on ipAch.

IOM Self Perceived Achievement Survey Items: Reliability Statistics

The reader should be aware that some of the IOM Self Perceived Achievement Survey items were single items rather than multi-item constructs, which affects how the reliability estimates should be interpreted. Each item was treated as one construct because no other instrument was available at the time, and this was the first use of the instrument.

Pre-survey reliability coefficients. As part of the measures to ensure instrument reliability, the researcher conducted a Cronbach’s alpha (α) test for the pre-survey. This was done to ensure that the instrument satisfied basic requirements of statistical reliability. The researcher was interested only in the reliability of the five items that comprised the IOM Self-rated Knowledge and Skills standards of the instrument. The results of the reliability coefficients for the pre-survey are presented in Table 2.

Table 2

Means, Standard Deviations, Cronbach’s Alpha, and Correlations: Pre-Survey (N = 90)

Variables M SD α 1 2 3 4 5

1. Rp 4.80 1.14 .57 - .45** .57** .47** .16

2. Rtw 4.41 0.95 .67 - .40** .23* .07

3. Re 5.08 1.01 .64 - .34** .01

4. Rq 3.44 1.42 .61 - .42*

5. Rinf 1.87 1.02 .72 -
ipAch 19.60 3.77 .70 .79** .60** .67** .78** .50**

Note. ** p < .01, corrected alpha = .00067; * p < .05, corrected alpha = .003; Rp = patient-centered care; Rtw = interdisciplinary teamwork; Re = evidence-based practice; Rq = quality improvement; Rinf = informatics; ipAch = overall initial perceived achievement.

Table 2 presents the results of the reliability coefficients for the five items on the IOM standards of the pre-survey of the Self-Reported Knowledge Achievement scale. From Table 2, the pre-survey Cronbach’s alpha estimate for the five items (overall initial perceived achievement, ipAch) was .70, and the Cronbach’s alpha estimates for the individual items were patient-centered care (Rp), .57; interdisciplinary teamwork (Rtw), .67; evidence-based practice (Re), .64; quality improvement (Rq), .61; and informatics (Rinf), .72. It would appear that the instrument has acceptable reliability for the construct of overall perceived achievement.
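
Cronbach’s alpha can be computed directly from the respondents-by-items matrix. The short Python sketch below illustrates the calculation; the item matrix shown is simulated, and the coefficients reported in Tables 2 and 3 were obtained with SPSS.

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Simulated 90 x 5 matrix of ratings on the five IOM standards (Rp, Rtw, Re, Rq, Rinf).
    rng = np.random.default_rng(3)
    pre = pd.DataFrame(rng.integers(1, 7, size=(90, 5)),
                       columns=["Rp", "Rtw", "Re", "Rq", "Rinf"])
    print(f"alpha = {cronbach_alpha(pre):.2f}")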

Post-survey reliability coefficients. The researcher conducted a Cronbach’s alpha (α) test for the reliability of the post-survey. This was done to ensure that the instrument satisfied basic requirements of statistical reliability. The results of the reliability coefficients for the post-survey are presented in Table 3.

Table 3

Means, Standard Deviations, Cronbach’s Alpha, and Correlations: Post-Survey (N = 90)

Variables M SD α 1 2 3 4 5

1. Rp1 5.62 0.74 .80 - .57** .58** .58** .46**

2. Rtw1 5.43 0.84 .79 - .64** .55** .49**

3. Re1 5.60 0.87 .80 - .54** .43**

4. Rq1 4.90 0.85 .79 - .55**

5. Rinf1 4.52 1.22 .84 -
fpAch 26.08 3.50 .84 .78** .81** .79** .81** .79**
Note. ** p < .01, corrected alpha = .00067; Rp1 = patient-centered care; Rtw1 = interdisciplinary teamwork; Re1 = evidence-based practice; Rq1 = quality improvement; Rinf1 = informatics; fpAch = overall final perceived achievement.


Table 3 presents the results of the reliability coefficients for the five items on the IOM standards of the post-survey of the Self-Reported Knowledge Achievement scale. As can be seen in Table 3, the post-survey Cronbach’s alpha estimate for the five items (overall final perceived achievement, fpAch) was .84, and the Cronbach’s alpha estimates for the individual items were patient-centered care (Rp1), .80; interdisciplinary teamwork (Rtw1), .79; evidence-based practice (Re1), .80; quality improvement (Rq1), .79; and informatics (Rinf1), .84. It would appear that the instrument has good reliability for the construct of overall perceived achievement.

Demographic Data: Cohort by Instructional Types

The demographic data of the HSP students were examined for the three cohort groups in their instructional methods groups. The results are summarized in Table 4.

Table 4

Demographic Data, by Cohort

Group

Cohort Participatory (n = 40) Direct (n = 50)

Fall 17 (42.5%) 19(38%)

Spring 11(27.5%) 17(34%)

Summer 12(30%) 14(28%)

Table 4 shows demographic data, by cohort, for the two instructional groups of HSP students. From Table 4, of the 40 HSP students in the participatory instructional group, 42.5% were in the fall cohort, 27.5% in the spring cohort, and 30% in the summer cohort; of the 50 HSP students in the direct instructional group, 38% were in the fall cohort, 34% in the spring cohort, and 28% in the summer cohort.

Demographic Data: Status by Instructional Types

The demographic data of the HSP students’ status were examined with respect to their instructional methods groups. The results are summarized in Table 5.

Table 5

Frequency Distribution of Demographic Data, by Status

Group

Status Participatory (n = 40) Direct (n = 50)

Undergraduate 8 (20%) 15 (30%)

Graduate 32 (80%) 35(70%)

Table 5 shows demographic data for the two instructional groups with status of the HSP students. From Table 5, of the 40 HSP students in the participatory instructional group, 20% were undergraduate students and 80% graduate students, whereas of the 50

HSP students that were in the direct instructional group, 30% were undergraduates, and

70% were graduate students.


Demographic Data: Gender by Instructional Types

The demographic data of the HSP students’ gender were examined with respect to their instructional methods groups. The results are summarized in Table 6.

Table 6

Frequency Distribution of Demographic Data, by Gender

Group

Gender Participatory (n = 40) Direct (n = 50)

Female 31 (77.5%) 42 (84%)

Male 9 (22.5%) 8 (16%)

Table 6 shows demographic data for the two instructional groups with gender of the HSP students. From Table 6, of the 40 HSP students in the participatory instruction group, 31 (77.5%) were female and 22.5% were male, whereas of the 50 HSP students in the direct instructional group, 84% were female and 16% were male.

Demographic Data: Age by Instructional Types

The demographic data of the HSP students’ age were examined with respect to their instructional methods groups. The results are summarized in Table 7.


Table 7

Frequency Distribution of Demographic Data, by Age

Group

Age Participatory (n = 40) Direct (n = 50)

Mean 24.28 23.02

SD 3.88 1.99

Age range [20, 38] [20, 31]

Table 7 shows demographic data, by age, for the two instructional groups of HSP students. From Table 7, the mean age of the 40 HSP students in the participatory group was 24.28 years (SD = 3.88; range = [20, 38]), whereas the mean age of the 50 HSP students in the direct group was 23.02 years (SD = 1.99; range = [20, 31]).

Demographic Data: Major by Instructional Types

The demographic data of the HSP students’ majors were examined with respect to their instructional methods groups. The results are summarized in Table 8.


Table 8

Frequency Distribution of Demographic Data, by Major

Group

Major Participatory (n = 40) Direct (n = 50)

Nursing 4 (10%) 11 (22%)

Medicine 4 (10%) 2 (4%)

Physical Therapy 7 (17.5%) 9 (18%)

Nutrition 4 (10%) 9 (18%)

Speech Language Pathology 10 (25%) 9 (18%)

Social Works 8 (20%) 6 (12%)

Music Therapy 3 (7.5%) 2 (4%)

Audiology 0 (0%) 2 (4%)

Table 8 shows demographic data, by major, for the two instructional groups of HSP students. From Table 8, of the 40 HSP students in the participatory instruction group, 10% each were Nursing, Medicine, and Nutrition majors, 17.5% were Physical Therapy majors, 25% were Speech Language Pathology majors, 20% were Social Work majors, and 7.5% were Music Therapy majors. Of the 50 HSP students in the direct instructional group, 22% were Nursing majors; 18% each were Physical Therapy, Nutrition, and Speech Language Pathology majors; 12% were Social Work majors; and 4% each were Medicine, Music Therapy, and Audiology majors.


Demographic Data: Team Preference by Instructional Types

The demographic data of the HSP students’ team preference were examined with respect to their instructional methods groups. The results are summarized in Table 9.

Table 9

Frequency Distribution of Demographic Data, by Team Preference

Group

Team Preference Participatory (n = 40) Direct (n = 50)

Working in Teams 24 (60%) 39 (78%)

Working Alone 16 (40%) 11 (22%)

Table 9 shows demographic data, by team preference, for the two instructional groups of HSP students. From Table 9, of the 90 HSP students for whom data were retrieved, the final number of students in the participatory group was 40, of whom 60% preferred working in teams and 40% preferred working alone, whereas the final number of students in the direct group was 50, of whom 78% preferred working in teams and 22% preferred working alone.

Demographic Data: Inter-Professional Team by Instructional Types

The demographic data of the HSP students’ inter-professional team were examined with respect to their instructional methods groups. The results are summarized in Table 10.


Table 10

Frequency Distribution of Demographic Data, by Interdisciplinary (IP) Team

Group

IP teams Participatory (n = 40) Direct (n = 50)

A 6 (15%) 6 (12%)

B 6 (15%) 6 (12%)

C 5 (12.5%) 7 (14%)

D 4 (10%) 6 (12%)

E 3 (7.5%) 6 (12%)

F 4 (10%) 5 (10%)

G 4 (10%) 4 (8%)

H 5(12.5%) 5 (10%)

K 3(7.5%) 5 (10%)

Table 10 shows demographic data, by IP team, for the two instructional groups of HSP students. The students worked in nine IP teams whose sizes ranged from three to seven members. Of the 40 HSP students in the participatory group, the team sizes were Team A (n = 6), Team B (n = 6), Team C (n = 5), Team D (n = 4), Team E (n = 3), Team F (n = 4), Team G (n = 4), Team H (n = 5), and Team K (n = 3). Of the 50 HSP students in the direct group, the team sizes were Team A (n = 6), Team B (n = 6), Team C (n = 7), Team D (n = 6), Team E (n = 6), Team F (n = 5), Team G (n = 4), Team H (n = 5), and Team K (n = 5).

Research Questions and Findings

Research question 1. The first research question was, “Do HSP students who are taught using a participatory instruction have greater gain on the change in overall perceived achievement scores than the HSP students who are taught using the direct instruction?” In order to answer this question, a hypothesis was formulated and an independent samples t-test was conducted.

Hypothesis 1. The null hypothesis (Ho) states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students taught using participatory instruction and the change in overall perceived achievement scores for the HSP students taught without using participatory instruction ($H_0: \mu_{gpAch.part} - \mu_{gpAch.direct} = 0$). The alternative hypothesis (Ha) states that the gain in the change in overall perceived achievement scores for HSP students taught using participatory instruction will be statistically significantly greater than that for HSP students taught without using participatory instruction ($H_a: \mu_{gpAch.part} > \mu_{gpAch.direct}$). The null hypothesis was tested using a two-tailed independent samples t-test. The results are summarized in Table 11. A significance level of .05 was used (.025 for each tail).


Table 11

Change Scores, Independent Samples T Test Results, and Effect Sizes for Students’ Self-

Concept and Perceived Achievement Due to Instructional Methods

Part(n = 40) Direct(n = 50) 95%CI Cohen’s

Variable M SD M SD t(88) p LB UB d

Rp 1.25 1.13 0.48 0.93 3.55 .001 0.34 1.20 0.74

Rtw 1.70 1.31 0.48 1.04 4.95 .001 0.73 1.71 1.03

Re 0.78 1.10 0.32 0.87 2.20 .031 0.04 0.87 0.46

Rq 1.65 1.61 1.30 1.50 1.06 .290 -0.30 1.00 0.23

Rinf 2.95 1.38 2.42 1.43 1.78 .079 -0.06 1.12 0.38
cpAch 8.32 4.73 5.00 3.63 3.78 .001 1.58 5.08 0.79

Note. Rp = patient-centered care; Rtw = interdisciplinary teamwork; Re = evidence-based practice; Rq = quality improvement; Rinf = informatics; cpAch = change in overall perceived achievement scores

From Table 11, the result of Levene’s test for equality of variances (p = .06) indicates that the t-test assuming equal variances should be interpreted. The results of the t-test assuming equal variances indicated that there was a statistically significant difference between the change in overall perceived achievement scores for the HSP students taught using participatory instruction and those taught without using participatory instruction, t(88) = 3.78, p = .001, d = 0.79, 95% CI [1.58, 5.08]. Therefore, the null hypothesis was rejected in favor of the stated alternative hypothesis. On average, HSP students in the participatory group gained more on the change in overall perceived achievement score (M = 8.32, SD = 4.73) than the HSP students who were not taught using participatory instruction (M = 5.00, SD = 3.63). Several of the gains in the change in perceived self-concept scores were positive and statistically significant: patient-centered care (Rp), t(88) = 3.55, p = .001, d = 0.74, 95% CI [0.34, 1.20]; interdisciplinary teamwork (Rtw), t(88) = 4.95, p = .001, d = 1.03, 95% CI [0.73, 1.71]; and evidence-based practice (Re), t(88) = 2.20, p = .031, d = 0.46, 95% CI [0.04, 0.87]. The remaining gains were positive but not statistically significant: quality improvement (Rq), t(88) = 1.06, p = .29, d = 0.23, 95% CI [-0.30, 1.00]; and informatics (Rinf), t(88) = 1.78, p = .08, d = 0.38, 95% CI [-0.06, 1.12].
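
The independent samples t-test and Cohen’s d reported in Table 11 follow the standard formulas; the Python sketch below illustrates the computation with placeholder change scores (the simulated group means and SDs are only loosely modeled on Table 11, and the reported values come from SPSS).

    import numpy as np
    from scipy import stats

    # Placeholder change-in-overall-perceived-achievement (cpAch) scores for the two groups.
    rng = np.random.default_rng(4)
    participatory = rng.normal(8.3, 4.7, 40)
    direct = rng.normal(5.0, 3.6, 50)

    # Levene's test guides whether the equal-variance t-test should be interpreted.
    print(f"Levene p = {stats.levene(participatory, direct).pvalue:.3f}")

    # Two-tailed independent samples t-test assuming equal variances.
    t, p = stats.ttest_ind(participatory, direct, equal_var=True)

    # Cohen's d from the pooled standard deviation.
    n1, n2 = len(participatory), len(direct)
    pooled_sd = np.sqrt(((n1 - 1) * participatory.var(ddof=1) +
                         (n2 - 1) * direct.var(ddof=1)) / (n1 + n2 - 2))
    d = (participatory.mean() - direct.mean()) / pooled_sd
    print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")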

Research question 2. The second research question was, “How do HSP students feel about team preference on a group module project with regard to participatory and direct instructional types?” In order to answer this question, hypotheses 2a, 2b, 2c, and

2d were formulated and an independent samples t-test was conducted for each.

Hypothesis 2a. The null hypothesis (Ho) states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction ($H_{01}: \mu_{cpAchWktm.part} - \mu_{cpAchWktm.direct} = 0$). The alternative hypothesis (Ha) states that the gain in the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction will be statistically significantly greater than that for the HSP students who preferred working in teams taught without using participatory instruction ($H_{a1}: \mu_{cpAchWktm.part} > \mu_{cpAchWktm.direct}$). The null hypothesis was tested using a two-tailed independent samples t-test. The results are summarized in Table 12. A significance level of .05 was used (.025 for each tail).

Table 12

Results of an Independent Samples t-Test for Change Scores of Students’ Self-concept and Achievement Due to Instructional Methods and Working in Teams

Part(n = 24) Direct(n = 39) 95%CI Cohen’s

Variable M SD M SD t(61) p LB UB d

Rp 1.33 1.17 0.49 0.79 3.14a .003 0.30 1.39 0.84

Rtw 1.54 1.18 0.41 1.13 3.92 .001 0.56 1.71 1.00

Re 0.96 0.86 0.26 0.72 3.50 .001 0.30 1.10 0.88

Rq 1.42 1.28 1.41 1.57 0.02 .006 -0.76 0.77 0.01

Rinf 3.12 1.26 2.36 1.48 2.11 .04 0.04 1.49 0.55
cpAch 8.38 4.27 4.92 3.55 3.47 .001 1.46 5.44 0.88

Note. a. df = 36.1; Rp = patient-centered care; Rtw = interdisciplinary teamwork; Re = evidence-based practice; Rq = quality improvement; Rinf = informatics; cpAch = change in overall perceived achievement scores


Table 12 presents the results of a two-tailed independent samples t-test. From Table 12, the results indicated that there was a statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and those who preferred working in teams taught using direct instruction, t(61) = 3.47, p = .001, d = 0.88, 95% CI [1.46, 5.44]. Therefore, the null hypothesis was rejected in favor of the stated alternative hypothesis. On average, HSP students who preferred working in teams and were taught using participatory instruction gained more on the change in overall perceived achievement score (M = 8.38, SD = 4.27) than those who preferred working in teams and were taught using direct instruction (M = 4.92, SD = 3.55). Most of the gains in the change in perceived self-concept scores were positive with large, statistically significant effects: patient-centered care (Rp), t(36.1) = 3.14, p = .003, d = 0.84, 95% CI [0.30, 1.39]; interdisciplinary teamwork (Rtw), t(61) = 3.92, p = .001, d = 1.00, 95% CI [0.56, 1.71]; and evidence-based practice (Re), t(61) = 3.50, p = .001, d = 0.88, 95% CI [0.30, 1.10]. The gains for the remaining two standards were positive with small and moderate effects, respectively: quality improvement (Rq), t(61) = 0.02, p = .006, d = 0.01, 95% CI [-0.76, 0.77]; and informatics (Rinf), t(61) = 2.11, p = .04, d = 0.55, 95% CI [0.04, 1.49].

Hypothesis 2b. The null hypothesis (Ho) states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction ($H_{02}: \mu_{cpAchWkal.part} - \mu_{cpAchWkal.direct} = 0$). The alternative hypothesis (Ha) states that the gain in the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction will be statistically significantly greater than that for the HSP students who preferred working alone taught without using participatory instruction ($H_{a2}: \mu_{cpAchWkal.part} > \mu_{cpAchWkal.direct}$). The null hypothesis was tested using a two-tailed independent samples t-test. The results are summarized in Table 13. A significance level of .05 was used (.025 for each tail).


Table 13

Results of an Independent Samples t-test for Change Scores of Students’ Self-concept and

Achievement Due to Instructional Methods and Team Preference (Working Alone, n =

27)

Part(n = 16) Direct(n = 11) 95%CI Cohen’s

Variable M SD M SD t(25) p LB UB d

Rp 1.12 1.09 0.45 1.37 1.42 .17 -0.30 1.65 0.54

Rtw 1.94 1.48 0.73 0.91 2.41 .02 0.18 2.25 0.99

Re 0.50 1.37 0.55 1.29 -0.09 .93 -1.12 1.03 -0.04

Rq 2.00 2.00 0.91 1.22 1.61 .12 -0.31 2.49 0.66

Rinf 2.69 1.54 2.64 1.29 0.09 .93 -1.11 1.21 0.04
cpAch 8.25 5.50 5.27 4.05 1.53 .14 -1.03 6.99 0.62

Note. Rp = patient-centered care; Rtw = interdisciplinary teamwork; Re = evidence-based practice; Rq = quality improvement; Rinf = informatics; cpAch = change in overall perceived achievement scores

Table 13 presents the results of an independent samples t-test. From Table 13, it can be seen that there was no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction and those who preferred working alone taught without using participatory instruction, t(25) = 1.53, p = .14, d = 0.62, 95% CI [-1.03, 6.99]. On average, HSP students who preferred working alone and were taught using participatory instruction gained more on the change in overall perceived achievement score (M = 8.25, SD = 5.50) than those who preferred working alone and were taught without using participatory instruction (M = 5.27, SD = 4.05). Only one of the gains in the change in perceived self-concept scores was positive with a large, statistically significant effect: interdisciplinary teamwork (Rtw), t(25) = 2.41, p = .02, d = 0.99, 95% CI [0.18, 2.25]. Several of the gains were positive but not statistically significant: patient-centered care (Rp), t(25) = 1.42, p = .17, d = 0.54, 95% CI [-0.30, 1.65]; quality improvement (Rq), t(25) = 1.61, p = .12, d = 0.66, 95% CI [-0.31, 2.49]; and informatics (Rinf), t(25) = 0.09, p = .93, d = 0.04, 95% CI [-1.11, 1.21]. Interestingly, there was a negative, statistically nonsignificant gain for evidence-based practice (Re), t(25) = -0.09, p = .93, d = -0.04, 95% CI [-1.12, 1.03].

Hypothesis 2c. The null hypothesis (Ho) states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction ($H_0: \mu_{cpAchWktm.part} - \mu_{cpAchWkal.part} = 0$). The alternative hypothesis (Ha) states that the gain in the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction will be statistically significantly greater than that for the HSP students who preferred working alone taught using participatory instruction ($H_a: \mu_{cpAchWktm.part} > \mu_{cpAchWkal.part}$). The null hypothesis was tested using a two-tailed independent samples t-test. The results are summarized in Table 14. A significance level of .05 was used (.025 for each tail).

Table 14

Comparison of Students’ Overall Perceived Achievement Change Scores, by Team

Preference (Working in Teams and Working Alone) and Participatory Instructional Type

Wktm(n = 24) Wkaln(n = 16) 95%CI Cohen’s

Part M SD M SD t(38) p LB UB d

Rp 1.33 1.17 1.12 1.09 0.57 .57 -0.53 0.95 0.19

Rtw 1.54 1.18 1.94 1.48 -0.84 .35 -1.25 0.46 -0.30

Re 0.96 0.86 0.50 1.37 1.19a .25 -0.34 1.25 0.40

Rq 1.42 1.28 2.00 2.00 -1.03b .31 -1.75 0.58 -0.35

Rinf 3.12 1.26 2.69 1.54 0.98 .33 -0.46 1.34 0.31
cpAch 8.38 4.27 8.25 5.50 0.08 .94 -3.01 3.26 0.03

Note. a. df = 22.9; b df = 23.2; Rp = patient-centered care; Rtw = interdisciplinary teamwork; Re = evidence-based practice; Rq = quality improvement; Rinf = informatics; cpAch = change in overall perceived achievement scores

Table 14 presents the results of an independent samples t-test. From Table 14, the results indicated that there was no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and those who preferred working alone taught using participatory instruction, t(38) = 0.08, p = .94, d = 0.03, 95% CI [-3.01, 3.26]. This suggests that a preference for working in teams might have helped the students in the participatory group, which might lead to increased student self-efficacy and, in turn, increased final perceived achievement. On average, HSP students who preferred working in teams taught using participatory instruction gained slightly more on the change in overall perceived achievement score (M = 8.38, SD = 4.27) than those who preferred working alone taught using participatory instruction (M = 8.25, SD = 5.50). None of the gains in the change in perceived self-concept scores was statistically significant; however, some of the gains were positive: patient-centered care (Rp), t(38) = 0.57, p = .57, d = 0.19, 95% CI [-0.53, 0.95]; evidence-based practice (Re), t(22.9) = 1.19, p = .25, d = 0.40, 95% CI [-0.34, 1.25]; and informatics (Rinf), t(38) = 0.98, p = .33, d = 0.31, 95% CI [-0.46, 1.34]; and some were negative: interdisciplinary teamwork (Rtw), t(38) = -0.84, p = .35, d = -0.30, 95% CI [-1.25, 0.46]; and quality improvement (Rq), t(23.2) = -1.03, p = .31, d = -0.35, 95% CI [-1.75, 0.58].

Hypothesis 2d. The null hypothesis (Ho) states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction ($H_0: \mu_{cpAchWktm.direct} - \mu_{cpAchWkal.direct} = 0$). The alternative hypothesis (Ha) states that the gain in the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction will be statistically significantly greater than that for the HSP students who preferred working alone taught without using participatory instruction ($H_a: \mu_{cpAchWktm.direct} > \mu_{cpAchWkal.direct}$). The null hypothesis was tested using a two-tailed independent samples t-test. The results are summarized in Table 15. A significance level of .05 was used (.025 for each tail).


Table 15

Comparison of Students’ Overall Perceived Achievement Change Scores, by Team

Preference (Working in Teams and Working Alone) and Direct Instructional Type

Wktm(n = 39) Wkln(n = 11) 95%CI Cohen’s

Direct M SD M SD t(48) p LB UB d

Rp 0.49 0.79 0.45 1.37 0.08a .94 -0.91 0.97 0.04

Rtw 0.41 1.13 0.73 0.91 -0.90 .38 -1.03 0.40 -0.31

Re 0.26 0.72 0.55 1.29 -0.71b .49 -1.18 0.60 -0.28

Rq 1.41 1.57 0.91 1.22 0.98 .33 0.53 1.53 0.36

Rinf 2.36 1.48 2.64 1.29 -0.56 .58 -1.27 0.71 -0.20
cpAch 4.92 3.55 5.27 4.05 -0.28 .78 -2.86 2.16 -0.09
Note. a. df = 11.9; b. df = 11.8; Rp = patient-centered care; Rtw = interdisciplinary teamwork; Re = evidence-based practice; Rq = quality improvement; Rinf = informatics; cpAch = change in overall perceived achievement scores

Table 15 presents the results of a two-tailed independent samples t-test. From Table 15, the results indicated that there was no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction and those who preferred working alone taught without using participatory instruction, t(48) = -0.28, p = .78, d = -0.09, 95% CI [-2.86, 2.16]. This suggests that a preference for working alone might have helped the students in the direct group, which might lead to increased student self-efficacy and, in turn, increased final perceived achievement. On average, HSP students who preferred working alone (Wkaln) taught without using participatory instruction gained more on the change in overall perceived achievement score (M = 5.27, SD = 4.05) than those who preferred working in teams (Wktm) taught without using participatory instruction (M = 4.92, SD = 3.55). None of the gains in the change in perceived self-concept scores was statistically significant; however, some of the gains were positive: patient-centered care (Rp), t(11.9) = 0.08, p = .94, d = 0.04, 95% CI [-0.91, 0.97]; and quality improvement (Rq), t(48) = 0.98, p = .33, d = 0.36, 95% CI [0.53, 1.53]; and some were negative: interdisciplinary teamwork (Rtw), t(48) = -0.90, p = .38, d = -0.31, 95% CI [-1.03, 0.40]; evidence-based practice (Re), t(11.8) = -0.71, p = .49, d = -0.28, 95% CI [-1.18, 0.60]; and informatics (Rinf), t(48) = -0.56, p = .58, d = -0.20, 95% CI [-1.27, 0.71].

Research question 3. The third research question was, “How does the participatory instruction of IOM standards on a group module project affect students’ final perceived achievement scores in their majors, controlling for their initial perceived achievement scores?” In order to answer this question, a hypothesis was formulated, and one-way ANOVA and ANCOVA were conducted.

Hypothesis 3. The null hypothesis (Ho) states that there is no statistically significant difference between the unadjusted and the covariate-adjusted final perceived achievement scores for the HSP students from various majors taught using participatory instruction, controlling for their initial perceived achievement scores ($H_0: \mu_{unadj.fpAch.part} - \mu_{adj.fpAch.part} = 0$). The alternative hypothesis (Ha) states that the unadjusted final perceived achievement scores for the HSP students from various majors taught using participatory instruction will be statistically significantly greater than their covariate-adjusted final perceived achievement scores ($H_a: \mu_{unadj.fpAch.part} > \mu_{adj.fpAch.part}$). The null hypothesis was tested using a one-way ANOVA and a univariate ANCOVA through the Type I method of sums of squares in GLM (a hierarchical approach). The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score (fpAch) was the dependent variable. The means and standard deviations of the initial perceived achievement scores by major, and the observed and adjusted means of the final perceived achievement scores by major, are presented in Table 16.


Table 16

Initial Perceived Achievement Means and Final Perceived Achievement Means by

Participatory Group (n = 40) and Major

ipAch fpAch

Major M SD Unadjusted M Adjusted M SD

BSN 20.25 4.57 27.75 27.48 2.99

PT 22.14 1.35 27.71 26.98 2.14

NUT 21.50 5.32 26.00 25.42 2.71

SLP 15.70 2.67 26.00 26.84 4.55

SW 19.38 4.10 28.12 28.06 4.00

Others 18.71 6.37 29.14 29.24 2.73

ALL 19.12 4.51 27.45 27.34 3.37

Note. BSN = Nursing, PT = Physical Therapy, NUT = Nutrition, SLP = Speech

Language Therapy, SW = Social Works, Others = Medicine, Music Therapy, and

Audiology

Table 16 presents the initial perceived achievement scores and the adjusted and unadjusted final perceived achievement means by major. From Table 16, the rank ordering of the majors’ means did not change with adjustment, except for NUT. After adjustment, the unadjusted and adjusted grand means for the participatory instruction group remained nearly the same (27.45 and 27.34, respectively). After adjustment for the initial perceived achievement score, within the participatory instructional group, final perceived achievement was rated highest by the “Others” majors (M = 29.24) and lowest by the NUT majors (M = 25.42). The unadjusted final perceived achievement mean was likewise highest for “Others” (M = 29.14) and lowest for NUT and SLP (M = 26.00).

The null hypothesis was first tested using a one-way ANOVA, and then using a univariate ANCOVA through the Type I method of sums of squares in GLM (a hierarchical approach), with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The results of the one-way ANOVA are presented in Table 17.

Table 17

Effect of Participatory Instruction on Final Perceived Achievement Scores (ANOVA), by

Major

Sources SS df MS F p η²

Between Groups 53.99 5 10.80 0.95 .46 .12

Within Group 387.91 34 11.41

Total 441.90 39

Table 17 presents a one-way ANOVA summary. From Table 17, for the participatory group, there was no statistically significant difference in the final perceived achievement scores among the six majors, F(5, 34) = 0.95, p = .46, η² = .12.
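
The one-way ANOVA and eta squared in Table 17 can be illustrated with a brief Python sketch. The six groups of scores below are placeholders for the fpAch scores of the six majors in the participatory group (group sizes taken from Table 8, with Medicine, Music Therapy, and Audiology collapsed into “Others”); the reported values were computed in SPSS.

    import numpy as np
    from scipy import stats

    # Placeholder fpAch scores for the six majors: BSN, PT, NUT, SLP, SW, Others.
    rng = np.random.default_rng(5)
    majors = [rng.normal(27.5, 3.4, n) for n in (4, 7, 4, 10, 8, 7)]

    # One-way ANOVA across majors.
    f, p = stats.f_oneway(*majors)

    # Eta squared = SS_between / SS_total.
    grand_mean = np.concatenate(majors).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in majors)
    ss_total = sum(((g - grand_mean) ** 2).sum() for g in majors)
    print(f"F = {f:.2f}, p = {p:.3f}, eta^2 = {ss_between / ss_total:.2f}")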

To check the homogeneity of regression assumption, the interaction between major and the covariate was tested using univariate ANCOVA through the Type III method of sums of squares in GLM with a custom model, with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The results are presented in Table 18.

Table 18

Interaction between Covariate and Major Using GLM -ANCOVA Type III SS, by

Participatory Group

Sources Type III SS df MS F p η̂²

ipAch 7.21 1 7.21 0.64 .43 .02

Major 33.33 5 6.67 0.59 .71 .10

Major*ipAch 35.44 5 7.09 0.63 .68 .10

Error 317.73 28 11.35

Total 441.90 39

Note. a. R Squared = .281 (Adjusted R Squared = -.001).

Table 18 presents the results of the interaction between major and the covariate through SPSS GLM ANCOVA Type III SS with a custom model. Table 18 shows that the ipAch × Major interaction was not statistically significant for participatory instruction, F(5, 28) = 0.63, p = .68, η̂² = .10. This indicated no significant violation of the homogeneity of regression assumption (i.e., no treatment-by-covariate interaction).
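
The homogeneity-of-regression check amounts to asking whether a major × ipAch interaction improves on the additive ANCOVA model. The Python sketch below illustrates that comparison with statsmodels on simulated data; the variable names mirror those used above, and the actual test was run in SPSS GLM.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Simulated participatory-group data: major, covariate (ipAch), outcome (fpAch).
    rng = np.random.default_rng(6)
    major = np.repeat(["BSN", "PT", "NUT", "SLP", "SW", "Other"], [4, 7, 4, 10, 8, 7])
    ipAch = rng.normal(19, 4.5, size=major.size)
    fpAch = 20 + 0.3 * ipAch + rng.normal(0, 3, size=major.size)
    df = pd.DataFrame({"major": major, "ipAch": ipAch, "fpAch": fpAch})

    # Compare the additive ANCOVA model with the model that adds the interaction.
    # A nonsignificant F for the added interaction supports homogeneity of regression.
    additive = smf.ols("fpAch ~ ipAch + C(major)", data=df).fit()
    interaction = smf.ols("fpAch ~ ipAch * C(major)", data=df).fit()
    print(anova_lm(additive, interaction))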

The null hypothesis was then tested using univariate ANCOVA through the Type I method of sums of squares in GLM (a hierarchical approach), with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The results for final perceived achievement by HSP students’ major, after controlling for the covariate, obtained through GLM ANCOVA Type I SS with a custom model, are presented in Table 19.

Table 19

Effect of Participatory Instruction on Final Perceived Achievement Scores, Controlling for Initial Perceived Achievement Scores, by Major

Sources Type I SS df MS F p η̂²
ipAch 41.58 1 41.58 3.89 .06 .11

Major 47.15 5 9.43 0.88 .51 .12

Error 353.17 33 10.70

Total 441.90 39

Table 19 shows the effect of participatory instruction with majors on HSP students’ final perceived achievement scores after controlling for initial perceived achievement scores. From Table 19, the main effect of major within the participatory instruction group was not statistically significant in the ANCOVA, F(5, 33) = 0.88, p = .51, η̂² = .12, when controlling for ipAch. This result suggested that, within the participatory group, major uniquely explained 12% of the variance in final perceived achievement scores.
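
The hierarchical (Type I) ANCOVA enters the covariate first and then the factor, so the factor's sum of squares is adjusted for ipAch. A minimal statsmodels sketch of that sequence, using the same assumed variable names and simulated data as above, is shown below; the reported values come from SPSS GLM.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Simulated participatory-group data (assumed variable names).
    rng = np.random.default_rng(7)
    major = np.repeat(["BSN", "PT", "NUT", "SLP", "SW", "Other"], [4, 7, 4, 10, 8, 7])
    ipAch = rng.normal(19, 4.5, size=major.size)
    fpAch = 20 + 0.3 * ipAch + rng.normal(0, 3, size=major.size)
    df = pd.DataFrame({"major": major, "ipAch": ipAch, "fpAch": fpAch})

    # Sequential (Type I) sums of squares: ipAch enters first, then major.
    model = smf.ols("fpAch ~ ipAch + C(major)", data=df).fit()
    print(anova_lm(model, typ=1))

    # Covariate-adjusted parameter estimates for each major relative to the
    # reference category, analogous to the estimates reported in Table 20.
    print(model.params)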

Parameter estimates were also obtained from the univariate ANCOVA (Type I method of sums of squares in GLM), with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The parameter estimates of final perceived achievement scores by major are presented in Table 20.

Table 20

Parameter Estimates of Final Perceived Achievement Scores, by Participatory Group and Major

95% CI
Parameter B SE t p LB UB η̂²
intercept 24.58 2.82 8.72 .001 18.84 30.31 .70
ipAch 0.24 0.14 1.80 .081 -0.03 0.52 .09
BSN -1.77 2.06 -0.86 .397 -5.96 2.43 .02
PT -2.27 1.81 -1.25 .219 -5.95 1.42 .05
NUT -3.82 2.09 -1.83 .076 -8.06 0.42 .09
SLP -2.41 1.66 -1.45 .160 -5.79 0.98 .06
SW -1.18 1.70 -0.70 .492 -4.63 2.27 .01
Others 0a
Note. a = This parameter is set to zero because it is redundant; BSN = Nursing, PT = Physical Therapy, NUT = Nutrition, SLP = Speech Language Therapy, SW = Social Works, Others = Medicine, Music Therapy, and Audiology

Table 20 presents the parameter estimates of final perceived achievement scores by students’ majors. From Table 20, within the participatory group, the mean fpAch scores did not differ significantly across levels of students’ majors when the covariate ipAch was statistically controlled, F(5, 33) = 0.88, p = .51, η̂² = .12. After adjustment for the ipAch covariate, the mean fpAch score for the BSN group was 1.77 points lower than for the “Others” group, a difference that was not statistically significant, t(33) = -0.86, p = .40, η̂² = .02, 95% CI [-5.96, 2.43]. Similarly, the adjusted mean fpAch score for the PT group was 2.27 points lower than for the “Others” group, t(33) = -1.25, p = .22, η̂² = .05, 95% CI [-5.95, 1.42]; for the NUT group, 3.82 points lower, t(33) = -1.83, p = .08, η̂² = .09, 95% CI [-8.06, 0.42]; for the SLP group, 2.41 points lower, t(33) = -1.45, p = .16, η̂² = .06, 95% CI [-5.79, 0.98]; and for the SW group, 1.18 points lower, t(33) = -0.70, p = .49, η̂² = .01, 95% CI [-4.63, 2.27]. None of these differences was statistically significant.

Research question 4. The fourth research question was, “How does the direct instruction of IOM standards on a group module project affect students’ final perceived achievement scores in their majors, controlling for their initial perceived achievement scores?” In order to answer this question, a hypothesis was formulated, and a one-way ANOVA and an ANCOVA were conducted.

Hypothesis 4. The null hypothesis (Ho) states that there is no statistically significant difference between the unadjusted and the covariate-adjusted final perceived achievement scores for the HSP students from various majors taught using direct instruction, controlling for their initial perceived achievement scores ($H_0: \mu_{unadj.fpAch.direct} - \mu_{adj.fpAch.direct} = 0$). The alternative hypothesis (Ha) states that the unadjusted final perceived achievement scores for the HSP students from various majors taught using direct instruction will be statistically significantly greater than their covariate-adjusted final perceived achievement scores ($H_a: \mu_{unadj.fpAch.direct} > \mu_{adj.fpAch.direct}$). The null hypothesis was tested using ANCOVA through the Type I method of sums of squares in GLM (a hierarchical approach), with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The means and standard deviations of the initial perceived achievement scores by major, and the observed and adjusted means of the final perceived achievement scores by major, are presented in Table 21.


Table 21

Initial Perceived Achievement Means and Final Perceived Achievement Means, by Direct

Group and Major

ipAch fpAch

Major M SD Unadjusted M Adjusted M SD

BSN 21.09 4.32 25.09 24.59 3.05

PT 19.89 2.03 25.56 25.60 3.71

NUT 19.33 2.74 24.22 24.51 2.73

SLP 19.00 2.18 26.00 26.44 3.71

SW 20.17 3.43 23.00 22.92 3.03

Others 20.33 3.08 25.50 25.34 4.51

ALL 19.98 3.03 24.98 24.90 3.39

Note. BSN = Nursing, PT = Physical Therapy, NUT = Nutrition, SLP = Speech Language

Therapy, SW = Social Works, Others = Medicine, Music Therapy, and Audiology

Table 21 presents the initial perceived achievement scores and the adjusted and unadjusted final perceived achievement means by major. From Table 21, the rank ordering of the majors’ means did not change with adjustment for any major in the direct instructional group. After adjustment, the unadjusted and adjusted grand means remained almost the same for the direct instruction group (24.98 and 24.90, respectively). After adjustment for initial perceived achievement scores, within the direct instructional group, final perceived achievement was rated highest by the SLP majors (M = 26.44) and lowest by the SW majors (M = 22.92). The unadjusted final perceived achievement mean was likewise highest for SLP (M = 26.00) and lowest for SW (M = 23.00).

The null hypothesis was first tested using a one-way ANOVA, and then using a univariate ANCOVA through the Type I method of sums of squares in GLM (a hierarchical approach), with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The results of the one-way ANOVA are presented in Table 22.

Table 22

Effect of Direct Instruction on Final Perceived Achievement Scores (ANOVA), by Major

Sources SS df MS F p η²

Between Groups 42.79 5 8.56 0.72 .61 .08

Within Groups 520.19 44 11.82

Total 562.98 49

Table 22 presents a one-way ANOVA summary. From Table 22, for the direct group, there was no statistically significant difference in the final perceived achievement scores among the six majors, F(5, 44) = 0.72, p = .61, η² = .08.

To check the homogeneity of regression assumption for the direct group, the interaction between major and the covariate was tested using univariate ANCOVA through the Type III method of sums of squares in GLM with a custom model, with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The results are presented in Table 23.

Table 23

Interaction between Covariate and Major Using GLM -ANCOVA Type III SS, by Direct

Group

Sources Type III SS df MS F p η̂²
ipAch 89.58 1 89.58 8.33 .01 .18

Major 20.63 5 4.13 0.38 .86 .05

Major*ipAch 26.64 5 5.33 0.50 .78 .06

Error 408.58 38 10.75

Total 562.98 49

Note. a. R Squared = .274 (Adjusted R Squared = .064).

Table 23 presents the results of the interaction between major and the covariate through SPSS GLM ANCOVA Type III with a custom model. Table 23 shows that the ipAch × Major interaction was not statistically significant for direct instruction, F(5, 38) = 0.50, p = .78, η̂² = .06. This indicated no significant violation of the homogeneity of regression assumption (i.e., no treatment-by-covariate interaction).

The null hypothesis was then tested using univariate ANCOVA through the Type I method of sums of squares in GLM (a hierarchical approach), with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The results for final perceived achievement by HSP students’ major, after controlling for the covariate, obtained through GLM ANCOVA Type I SS with a custom model, are presented in Table 24.

Table 24

Effect of Direct Instruction on Final Perceived Achievement Scores, Controlling for

Initial Perceived Achievement Scores, by Major

Sources Type I SS df MS F p η̂²
ipAch 75.87 1 75.87 7.50 .01 .15
Major 51.89 5 10.38 1.03 .42 .11

Error 435.22 43 10.12

Total 562.98 49

Table 24 shows the effect of direct instruction with majors on HSP students’ final perceived achievement scores after controlling for initial perceived achievement scores. From Table 24, the main effect of major within the direct instruction group was not statistically significant in the ANCOVA, F(5, 43) = 1.03, p = .42, η̂² = .11, when controlling for ipAch. This result suggested that, within the direct group, major uniquely explained 11% of the variance in final perceived achievement scores.

Parameter estimates were also obtained from the univariate ANCOVA (Type I method of sums of squares in GLM), with the initial perceived achievement score (ipAch) as the covariate and the final perceived achievement score (fpAch) as the dependent variable. The parameter estimates of final perceived achievement scores by major are presented in Table 25.

Table 25

Parameter Estimates of Final Perceived Achievement Score, by Direct Group and Major

95% CI

Parameter B SE t p LB UB η̂²
intercept 16.40 3.40 4.82 .000 9.54 23.25 .351
ipAch 0.45 0.16 2.90 .006 0.14 0.76 .163

BSN -0.75 1.62 -0.46 .646 -4.01 2.52 .005

PT 0.26 1.68 0.15 .880 -3.13 3.64 .001

NUT -0.83 1.68 -0.49 .625 -4.23 2.57 .006

SLP 1.10 1.69 0.65 .520 -2.31 4.50 .010

SW -2.43 1.84 -1.32 .194 -6.13 1.28 .039

Others 0

Note. a = This parameter is set to zero because it is redundant; BSN = Nursing, PT =

Physical Therapy, NUT = Nutrition, SLP = Speech Language Therapy, SW = Social

Works, Others = Medicine, Music Therapy, and Audiology


Table 25 presents the parameter estimates of final perceived achievement (fpAch) scores by students’ majors. From Table 25, within the direct group, the mean fpAch scores did not differ significantly across levels of students’ majors when the covariate initial perceived achievement (ipAch) score was statistically controlled, F(5, 43) = 1.03, p = .42, η̂² = .11. After adjustment for the ipAch covariate, the mean fpAch score for the BSN group was 0.75 points lower than for the “Others” group, a difference that was not statistically significant, t(43) = -0.46, p = .65, η̂² = .01, 95% CI [-4.01, 2.52]. The adjusted mean fpAch score for the PT group was 0.26 points higher than for the “Others” group, t(43) = 0.15, p = .88, η̂² = .001, 95% CI [-3.13, 3.64]; for the NUT group, 0.83 points lower, t(43) = -0.49, p = .63, η̂² = .01, 95% CI [-4.23, 2.57]; for the SLP group, 1.10 points higher, t(43) = 0.65, p = .52, η̂² = .01, 95% CI [-2.31, 4.50]; and for the SW group, 2.43 points lower, t(43) = -1.32, p = .19, η̂² = .04, 95% CI [-6.13, 1.28]. None of these differences was statistically significant.

Research question 5. The fifth research question was, “What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?” In order to answer this question, a hypothesis was formulated, and partial correlation and hierarchical multiple regression analyses were conducted.

Hypothesis 5. The null hypothesis (Ho) states that there is no positive and statistically significant impact of instructional type on the HSP students’ final perceived achievement scores in their majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. The alternative hypothesis (Ha) states that participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of HSP students from various majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores and their majors. The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The correlations of fpAch with majors and with instructional type are presented in Table 26.


Table 26

Summary of Correlation Coefficients, Means, and Standard Deviations for Scores on fpAch, Major, Instructional Types, and ipAch

Measure 1 2 3 4 5 6 7 8 9 10

1. fpAch 1

2. dBSN .00 1

3. dPT .07 -.20 1

4. dNUT -.11 -.18 -.19 1

5. dSLP .00 -.22* -.24* -.21 1

6. dSW -.02 -.18 -.20 -.17 -.21* 1

7. dOther .20 -.17 -.18 -.16 -.20 -.17 1

8. trtD -.28* .22* -.02 .08 -.05 -.09 -.04 1

9. trtP .39** -.13 .08 -.04 -.04 .12 .09 -.93** 1

10. ipAch .27* .24* .18 .11 -.25* .09 .11 .24* .13 1

M 26.08 3.48 3.71 2.89 3.64 3.07 2.81 11.10 8.50 19.60

SD 3.58 8.00 8.07 7.20 7.21 7.32 7.12 10.23 10.0 3.77

Note. (N = 90); * p < .05 (2-tailed); **p< .001; Corrected alpha= .008

Table 26 presents the correlation coefficients between fpAch and the major levels and instructional types, without controlling for ipAch scores. From Table 26, the correlation between fpAch and participatory instruction was .39, which was slightly larger than the partial correlation obtained when the effect of ipAch was controlled for (r = .37; see Table 27).

The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The partial correlations of fpAch and majors, and fpAch and instructional type are presented in Table 27.

Table 27

Summary of Partial Correlation Coefficients, Means, and Standard Deviations for Scores on fpAch, Major, and Instructional Types

Measure 1 2 3 4 5 6 7 8 9

1. fpAch 1

2. dBSN -.07 1

3. dPT .02 -.26* 1

4. dNUT -.15 -.21* -.21* 1

5. dSLP .07 -.17 -.20 -.19 1

6. dSW -.05 -.21* -.22* -.18 -.20 1

7. dOther .18 -.21 -.21 -.17 -.18 -.18 1

8. trtD -.37** .17 -.06 .06 .01 -.11 -.07 1

9. trtP .37** -.17 .06 -.06 -.01 .11 .07 -1.0** 1

M 26.08 3.48 3.71 2.89 3.64 3.07 2.81 11.10 8.50
SD 3.58 8.00 8.07 7.20 7.21 7.32 7.12 10.23 10.01
Note. (N = 90); * p < .05 (2-tailed); ** p < .001; corrected alpha = .008

Table 27 presents partial correlation coefficients between fpAch and major levels, controlling for ipAch scores. From Table 27, the results indicated that the partial correlation between fpAch and participatory instruction was .37 which was considerably less than the correlation when the effect of ipAch was not controlled for (r = .39) (see

Table 26).

In fact, from Tables 26 and 27, the correlation coefficient remained almost the same after controlling for ipAch. Although this correlation was still statistically significant (its p-value remained below .05), the relationship between fpAch and participatory instruction weakened slightly. In terms of variance, the value of R2 for the partial correlation was .14, which means that participatory instruction now shared 14% of the variance in fpAch (compared to 15% when ipAch was not controlled). However, the partial correlation between fpAch and direct instruction was -.37, which was larger in magnitude than the correlation when the effect of ipAch was not controlled for (r = -.28); the coefficient was about 1.3 times what it was before. This correlation remained statistically significant (its p-value was still below .05), and the negative relationship between fpAch and direct instruction became stronger. In terms of variance, the value of R2 for the partial correlation was .14, which means that direct instruction shared 14% of the variance in fpAch (compared to 8% when ipAch was not controlled).

Examining the correlation coefficients (r = .01) and partial correlation coefficients (rp = .07) of the levels of the major dummy variables, they were very close to 0 and non-significant. In terms of variance, the values of R2 for the partial correlations were nearly .01 and non-significant, indicating that the levels of major shared nearly 0% of the variance in fpAch (as compared to nearly 0% when ipAch was not controlled). This suggests that major had no influence on final perceived achievement score.
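As an illustration of the partial-correlation logic used throughout this section, the sketch below residualizes both variables on the covariate and correlates the residuals. It is a generic example, not the author's SPSS procedure; the variable values are invented, and the printed p-value is only approximate because the degrees of freedom are not reduced for the covariate.

    import numpy as np
    from scipy import stats

    def partial_corr(x, y, z):
        """First-order partial correlation of x and y, controlling for z."""
        x, y, z = map(np.asarray, (x, y, z))
        # Residualize x and y on z with simple least-squares fits.
        rx = x - np.polyval(np.polyfit(z, x, 1), z)
        ry = y - np.polyval(np.polyfit(z, y, 1), z)
        return stats.pearsonr(rx, ry)

    # Invented scores for five students: treatment dummy, final and initial scores.
    trtP  = [1, 1, 0, 1, 0]
    fpAch = [26, 28, 24, 30, 25]
    ipAch = [20, 22, 18, 23, 19]
    r_partial, p_approx = partial_corr(trtP, fpAch, ipAch)
    print(round(r_partial, 2), round(p_approx, 3))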

The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The summary of the hierarchical multiple regression analysis for the relationship between fpAch and major, and fpAch and instructional type, with ipAch, is presented in Table 28.


Table 28

Summary of Hierarchical Regression Analysis for Variables ipAch, fpAch, and Major as a Function of Instruction

Variable B t sr2 R R2 ΔR2

Step 1 (R = .27, R2 = .07, ΔR2 = .07)
ipAch .25 2.60* .07

Step 2 (R = .35, R2 = .12, ΔR2 = .05)
ipAch .29 2.67** .08
dBSN-ipAch -.03 -0.54 .00
dNUT-ipAch -.07 -1.05 .01
dSLP-ipAch .03 0.39 .00
dSW-ipAch -.03 -0.40 .00
dOther-ipAch .07 1.08 .01

Step 3 (R = .49, R2 = .24, ΔR2 = .12)
ipAch .23 2.26* .05
dBSN-ipAch -.00 -0.04 .00
dNUT-ipAch -.05 -0.84 .01
dSLP-ipAch .03 0.55 .00
dSW-ipAch -.03 -0.57 .00
dOther-ipAch .07 1.10 .01
trtP-ipAch .13 3.52*** .12

Note. N = 90; * p < .05, ** p < .01, *** p < .001.

Table 28 shows the results of the hierarchical regression analysis for the dummy variables of major predicting final perceived achievement. From Table 28, the hierarchical multiple regression revealed that at Step 1, initial perceived achievement contributed significantly to the regression model, F(1, 88) = 6.73, p = .01, and accounted for 7% of the variation in final perceived achievement. In Step 2, introducing the major dummy variables explained an additional 5% of the variation in final perceived achievement, and this change in R2 was not significant, F(5, 83) = 1.00, p = .43. In Step 3, adding instructional type to the regression model explained an additional 12% of the variation in final perceived achievement, and this change in R2 was significant, F(1, 82) = 12.37, p = .001. When all three blocks of independent variables were included in Step 3 of the regression model, none of the major variables was a significant predictor of final perceived achievement. The most important predictor of final perceived achievement was participatory instruction, which uniquely explained 12% of the variation in final perceived achievement. Together the three blocks of independent variables accounted for 24% of the variance in final perceived achievement.
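The same three-step model can be fit outside SPSS; the sketch below is offered only as a hedged illustration and uses statsmodels with assumed file and column names matching the dummy variables reported in Table 28.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("hsp_scores.csv")   # hypothetical data file for the 90 students

    # Step 1: covariate only; Step 2: add major dummies; Step 3: add instruction.
    m1 = smf.ols("fpAch ~ ipAch", data=df).fit()
    m2 = smf.ols("fpAch ~ ipAch + dBSN + dNUT + dSLP + dSW + dOther", data=df).fit()
    m3 = smf.ols("fpAch ~ ipAch + dBSN + dNUT + dSLP + dSW + dOther + trtP",
                 data=df).fit()

    for step, m in enumerate((m1, m2, m3), start=1):
        print(f"Step {step}: R2 = {m.rsquared:.2f}")

    # Nested-model F tests correspond to the Delta-R2 tests reported in the text.
    print(anova_lm(m1, m2))
    print(anova_lm(m2, m3))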

Research question 6. The sixth research question was, “What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?” In order to answer this question, a hypothesis was formulated, and partial correlation and hierarchical multiple regression were conducted.

Hypothesis 6. The null hypothesis (Ho) states that instructional type has no positive and statistically significant impact on the HSP students’ final perceived achievement scores in their team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. The alternative hypothesis (Ha) states that participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of HSP students from various team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The correlations of fpAch and team preference, and fpAch and instructional type are presented in Table 29.


Table 29

Summary of Correlation Coefficients, Means, and Standard Deviations for Scores on ipAch and fpAch as a function of Instruction, Team Preference (N = 90)

Measure 1 2 3 4 5 6

1. fpAch 1

2. dWkm .01 1

3. dWka .01 -.92** 1

4. trtD -.28* .25* -.16 1

5. trtP .39** -.15 .20 -.93** 1

6. ipAch .27* .28* .12 .24* .13 1

M 26.08 13.74 5.86 11.10 8.50

SD 3.58 9.56 9.25 10.23 10.01

Note. (N = 90); * p < .05 (2-tailed); **p< .001.

Table 29 presents correlation coefficients between fpAch and team preference levels, without controlling for ipAch scores. From Table 29, the results indicated that the correlation between fpAch and participatory instruction was .39, which was slightly higher than the partial correlation when the effect of ipAch was controlled for (r = .37) (see Table 30).

The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The partial correlations of fpAch and team preference, and fpAch and instructional type are presented in Table 30.

Table 30

Summary of Partial Correlation Coefficients, Means, and Standard Deviations for Scores on ipAch and fpAch as a function of Instruction, Team Preference (N = 90)

Measure 1 2 3 4 5

1. fpAch 1

2. dWkm -.07 1

3. dWka .07 -1.00** 1

4. trtD -.37** .19 -.19 1

5. trtP .37** -.19 .19 -1.00** 1

M 26.08 13.74 5.86 11.10 8.50

SD 3.58 9.56 9.25 10.23 10.01

Note. (N = 90); * p < .05 (2-tailed); **p< .001.

Table 30 presents partial correlation coefficients between fpAch and team preference levels, controlling for ipAch scores. From Table 30, the results indicated that the partial correlation between fpAch and participatory instruction was .37, which was slightly less than the correlation when the effect of ipAch was not controlled for (r = .39) (see Table 29).

In fact, from Tables 29 and 30, the correlation coefficient remained almost the same after controlling for ipAch. Although this correlation was still statistically significant (its p-value remained below .05), the relationship between fpAch and participatory instruction weakened slightly. In terms of variance, the value of R2 for the partial correlation was .14, which means that participatory instruction now shared 14% of the variance in fpAch (compared to 15% when ipAch was not controlled). However, the partial correlation between fpAch and direct instruction was -.37, which was larger in magnitude than the correlation when the effect of ipAch was not controlled for (r = -.28); the coefficient was about 1.3 times what it was before. This correlation remained statistically significant (its p-value was still below .05), and the negative relationship between fpAch and direct instruction became stronger. In terms of variance, the value of R2 for the partial correlation was .14, which means that direct instruction shared 14% of the variance in fpAch (compared to 8% when ipAch was not controlled).

Examining the correlation coefficients (r = .01) and partial correlation coefficients (rp = .07) of the levels of team preference, they were very close to 0 and non-significant, suggesting that working in teams and working alone had no influence on final perceived achievement score. In terms of variance, the value of R2 for the partial correlation was .0049, indicating that the levels of team preference shared nearly 0% of the variance in fpAch (compared to nearly 0% when ipAch was not controlled).

The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The summary of hierarchical multiple regression analysis for the relationship between fpAch and team preference, and fpAch and instructional types with ipAch are presented in Table 31.

Table 31

Summary of Hierarchical Regression Analysis for Variables Predicting Final Perceived

Achievement, by Team Preference (N = 90)

Variable B t sr2 R R2 ΔR2

Step 1 (R = .27, R2 = .07, ΔR2 = .07)
ipAch .25 2.60* .07

Step 2 (R = .28, R2 = .08, ΔR2 = .01)
ipAch .25 2.49* .26
dWka-ipAch .03 0.65 .00

Step 3 (R = .44, R2 = .20, ΔR2 = .12)
ipAch .21 2.26* .05
dWka-ipAch .00 -0.01 .00
trtP-ipAch .13 3.60 .12

Note. N = 90; * p < .05, ** p < .01, *** p < .001

Table 31 shows the results of the hierarchical regression analysis for the team preference dummy variable predicting final perceived achievement. From Table 31, the hierarchical multiple regression revealed that at Step 1, initial perceived achievement contributed significantly to the regression model, F(1, 88) = 6.73, p = .01, and accounted for 7% of the variation in final perceived achievement. In Step 2, introducing the team preference dummy variable explained an additional 1% of the variation in final perceived achievement, and this change in R2 was not significant, F(1, 87) = 0.43, p = .52. In Step 3, adding instructional type to the regression model explained an additional 12% of the variation in final perceived achievement, and this change in R2 was significant, F(1, 86) = 12.99, p = .001. When all three blocks of independent variables were included in Step 3 of the regression model, the team preference variable was not a significant predictor of final perceived achievement. The most important predictor of final perceived achievement was participatory instruction, which uniquely explained 12% of the variation in final perceived achievement. Together the three blocks of independent variables accounted for 20% of the variance in final perceived achievement.
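The F statistics reported for each change in R2 can also be recovered by hand from the R2 values. The short check below uses the Step 3 values from Table 31 and is illustrative only.

    def f_change(r2_full, r2_reduced, n, k_full, k_added):
        """F test for the increment in R2 when k_added predictors join a model
        that ends up with k_full predictors and n cases."""
        df1 = k_added
        df2 = n - k_full - 1
        f = ((r2_full - r2_reduced) / df1) / ((1 - r2_full) / df2)
        return f, df1, df2

    # Step 3 of Table 31: R2 rises from .08 to .20 after adding one predictor (trtP).
    print(f_change(r2_full=0.20, r2_reduced=0.08, n=90, k_full=3, k_added=1))
    # -> roughly (12.9, 1, 86), consistent with the reported F(1, 86) = 12.99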

Research question 7. The seventh research question was, “How does a participatory instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their IP teams, controlling for their initial perceived achievement scores?” In order to answer this question, a hypothesis was formulated, and ANOVA and ANCOVA were conducted.

Hypothesis 7. The null hypothesis (Ho) states that, for the HSP students from various IP teams taught using participatory instruction, there is no statistically significant difference between the unadjusted and the covariate-adjusted final perceived achievement means, controlling for their initial perceived achievement scores (H0: μunadj fpAch-part − μadj fpAch-part = 0). The alternative hypothesis (Ha) states that, for the HSP students from various IP teams taught using participatory instruction, the unadjusted final perceived achievement mean will be statistically significantly greater than the covariate-adjusted final perceived achievement mean, controlling for their initial perceived achievement scores (Ha: μunadj fpAch-part > μadj fpAch-part). The null hypothesis was tested using

ANCOVA through Type I Method of Sum of Squares in GLM (a hierarchical approach).

The means and standard deviations of the initial perceived achievement scores by IP teams, the observed and adjusted means of the final perceived achievement scores by IP teams are presented in Table 32.

Table 32

Initial Perceived Achievement Means and Final Perceived Achievement Means, by

Participatory Group and IP Team

ipAch fpAch

Team (n) M SD Unadjusted M Adjusted M SD

A (6) 20.33 3.08 26.67 26.31 1.63

B (6) 21.00 3.10 25.17 24.62 4.45

C (5) 17.80 6.46 27.60 27.99 2.19

D (4 ) 19.75 4.57 26.75 26.57 4.03

E (3 ) 15.00 5.00 26.33 27.54 1.53

F (4 ) 22.25 5.38 29.25 28.34 2.87

G (4 ) 19.75 4.57 29.00 28.82 2.94

H (5 ) 18.20 3.77 30.00 30.27 4.90

K (3 ) 15.00 3.61 26.67 27.87 2.52

Total (40) 19.12 4.51 27.45 27.59 3.37

Table 32 presents initial perceived achievement scores, and adjusted and unadjusted IP team means for the final perceived achievement scores. Table 32 shows that the rank ordering of the IP team means was not changed by adjustment for the covariate for IP teams B, C, and H in the participatory instructional group. However, after adjustment, the grand mean for the participatory instruction group increased slightly, from 27.45 to 27.59. Nevertheless, the F for the main effect of participatory instruction in the ANCOVA (1.39) was larger than the F when initial perceived achievement was not statistically controlled (1.10), because controlling for the variance associated with initial perceived achievement substantially reduced the within-group error variance. After adjustment for initial perceived achievement scores, within the participatory instructional group, the IP team with the highest adjusted final perceived achievement mean was team H (M = 30.27) and the lowest was team B (M = 24.62).
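The covariate-adjusted means in Table 32 follow the usual ANCOVA adjustment: each team's observed fpAch mean is shifted by the pooled within-group slope times the team's deviation from the grand mean on ipAch. The sketch below is illustrative only; the slope of 0.29 is borrowed from the ipAch estimate in Table 36 and may differ slightly from the value SPSS used internally.

    def adjusted_mean(team_fpAch_mean, team_ipAch_mean, grand_ipAch_mean, slope):
        """Covariate-adjusted mean for one group in a one-factor ANCOVA."""
        return team_fpAch_mean - slope * (team_ipAch_mean - grand_ipAch_mean)

    # IP team A in the participatory group (values from Table 32, slope assumed 0.29):
    print(round(adjusted_mean(26.67, 20.33, 19.12, 0.29), 2))  # about 26.32 vs. 26.31 reported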

The null hypothesis was tested using univariate ANCOVA through the Type I Method of Sum of Squares in GLM (a hierarchical approach). The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score (fpAch) was the dependent variable. As a preliminary step, a one-way ANOVA without the covariate was conducted; its results are presented in Table 33.


Table 33

Effect of Participatory Instruction on Final Perceived Achievement Mean, by Inter-

Profession Teams (ANOVA)

Source SS df MS F p 휂2

Between group 97.70 8 12.21 1.10 .39 .22

Within group 344.20 31 11.10

Total 441.90 39

Table 33 presents the results for one-way ANOVA by IP teams. From Table 33, there was no statistically significant difference in the final perceived achievement scores for the participatory group in their IP teams, F(8, 31) = 1.10, p = 0.39, 휂2 = .22.
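The eta-squared reported for this ANOVA is simply the between-groups sum of squares over the total sum of squares; the two-line check below uses the values printed in Table 33.

    ss_between, ss_total = 97.70, 441.90    # values from Table 33
    print(round(ss_between / ss_total, 2))  # 0.22, the reported eta-squared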

To check the homogeneity of regression assumption, a univariate ANCOVA through the Type III Method of Sum of Squares in GLM was conducted with a custom model. The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score (fpAch) was the dependent variable. The results of the interaction between IP team and the covariate through SPSS GLM ANCOVA Type III SS with a custom model are presented in Table 34.


Table 34

Interaction between Covariate and Inter-professional Team Using GLM -ANCOVA Type

III SS, by Participatory Group

Sources Type III SS df MS F a p η̂2

ipAch 32.91 1 32.91 2.98 .10 .12

IP team 52.14 8 6.52 0.59 .78 .18

IP team*ipAch 48.83 8 6.10 0.55 .81 .17

Error 243.33 22 11.06

Total 441.90 39

a. R Squared = .449 (Adjusted R Squared = .024)

Table 34 presents the results of the interaction between the covariate and IP team through SPSS GLM ANCOVA with a custom model, Type III Sum of Squares. From Table 34, the interaction ipAch * IP team within the participatory instruction group was not statistically significant, F(8, 22) = 0.55, p = .81, η̂2 = .17. This suggested that there was no significant violation of the homogeneity of regression assumption.
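The same assumption check can be run outside SPSS by adding a covariate-by-factor interaction to the model and testing it. The hedged sketch below uses statsmodels, with the file and column names (fpAch, ipAch, ip_team) assumed for illustration.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("participatory_group.csv")   # hypothetical file for the 40 students

    # Model with the covariate-by-team interaction; a non-significant
    # ipAch:C(ip_team) row supports the equal-slopes (homogeneity) assumption.
    model = smf.ols("fpAch ~ ipAch * C(ip_team)", data=df).fit()
    print(anova_lm(model, typ=3))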

The null hypothesis was tested using univariate ANCOVA through the Type I Method of Sum of Squares in GLM (a hierarchical approach). The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score (fpAch) was the dependent variable. The results of the differences between the adjusted and unadjusted means on the final perceived achievement by HSP students' IP teams, after controlling for the covariate through GLM ANCOVA Type I SS with a custom model, are presented in Table 35.

Table 35

Effect of Participatory Instruction on Final Perceived Achievement Scores, Controlling for Initial Perceived Achievement Scores, by IP Team

Sources Type I SS df MS F p η̂2

ipAch 41.58 1 41.58 4.27 .048 .13

IP Team 108.16 8 13.52 1.39 .24 .27

Error 292.16 30 9.74

Total 441.90 39

Table 35 presents the results for the ANCOVA Type I SS by IP teams. From Table 35, within the participatory group, the mean fpAch scores did not differ significantly across levels of students' IP teams, F(8, 30) = 1.39, p = .24, η̂2 = .27, whereas the covariate ipAch was statistically significant, F(1, 30) = 4.27, p = .048, η̂2 = .13. Comparing Tables 33 and 35, the effect size for IP team after adjustment (η̂2 = .27) was greater than the unadjusted effect size (η2 = .22).
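The adjusted F for IP team in Table 35 can be verified directly from the printed sums of squares, as the short check below shows.

    ms_team  = 108.16 / 8    # IP team SS / df, from Table 35
    ms_error = 292.16 / 30   # error SS / df, from Table 35
    print(round(ms_team / ms_error, 2))   # about 1.39, the reported F(8, 30)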

The null hypothesis was tested using univariate ANCOVA through Type I

Method of Sum of Squares in GLM (a hierarchical approach). The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score 268

(fpAch) was the dependent variable. The results of parameter estimates of final achievement scores by IP team are presented in Table 36.

Table 36

Parameter Estimates of Final Perceived Achievement Score, by Participatory Group and

IP Team

95% CI

Parameter B SE t p LB UB η̂2

intercept 22.28 2.62 8.51 .001 16.93 27.62 .71
ipAch 0.29 0.13 2.31 .03 0.03 0.55 .15
dtmA -1.56 2.31 -0.68 .50 -6.27 3.15 .02
dtmB -3.26 2.33 -1.40 .17 -8.02 1.51 .06
dtmC 0.11 2.31 0.05 .96 -4.60 4.82 .00
dtmD -1.31 2.46 -0.53 .60 -6.33 3.71 .01
dtmE -0.33 2.55 -0.13 .90 -5.54 4.87 .00
dtmF 0.46 2.55 0.18 .86 -4.76 5.68 .00
dtmG 0.94 2.46 0.38 .70 -4.08 5.96 .01
dtmH 2.40 2.32 1.04 .31 -2.33 7.12 .03
dtmK a 0

a. This parameter is set to zero because it is redundant.


Table 36 presents the parameter estimates of final perceived achievement scores by students' IP teams. From Table 36, the parameter estimates corresponded to the slope coefficients and the values to which the dummy variables for IP teams were set. The codes IP team A (dtmA) = 1, IP team B (dtmB) = 2, IP team C (dtmC) = 3, IP team D (dtmD) = 4, IP team E (dtmE) = 5, IP team F (dtmF) = 6, IP team G (dtmG) = 7, IP team H (dtmH) = 8, and IP team K (dtmK) = 0 provide information about the contrasts between the adjusted final perceived achievement means of dtmA versus dtmK, dtmB versus dtmK, dtmC versus dtmK, dtmD versus dtmK, dtmE versus dtmK, dtmF versus dtmK, dtmG versus dtmK, and dtmH versus dtmK, respectively.
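These contrasts amount to treatment (dummy) coding with IP team K as the reference category, so each coefficient is that team's covariate-adjusted difference from team K. A hedged statsmodels sketch of the same parameterization follows; the file and column names are assumptions for illustration.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("participatory_group.csv")   # hypothetical 40-student file

    # Treatment coding with team K dropped as the reference level, so each team
    # coefficient is its adjusted contrast against team K (cf. Table 36).
    model = smf.ols("fpAch ~ ipAch + C(ip_team, Treatment(reference='K'))",
                    data=df).fit()
    print(model.params)       # intercept, ipAch slope, and one contrast per team
    print(model.conf_int())   # 95% confidence intervals for those contrasts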

From Table 36, within the participatory group, after adjustment for the ipAch covariate, the mean fpAch score for the IP team A group was 1.56 points lower than for the IP team K group, and this difference was not statistically significant, t(30) = -0.68, p = .50, η̂2 = .02, 95% CI [-6.27, 3.15]. In addition, after adjustment for the ipAch covariate, the mean fpAch score for the IP team B group was 3.26 points lower than for the IP team K group, and this difference was not statistically significant, t(30) = -1.40, p = .17, η̂2 = .06, 95% CI [-8.02, 1.51]. Further, after adjustment for the ipAch covariate, the mean fpAch score for the IP team C group was 0.11 points higher than for the IP team K group, and this difference was not statistically significant, t(30) = 0.05, p = .96, η̂2 = .00, 95% CI [-4.60, 4.82]. After adjustment for the ipAch covariate, the mean fpAch score for the IP team D group was 1.31 points lower than for the IP team K group, and this difference was not statistically significant, t(30) = -0.53, p = .60, η̂2 = .01, 95% CI [-6.33, 3.71]. After adjustment for the ipAch covariate, the mean fpAch score for the IP team E group was 0.33 points lower than for the IP team K group, and this difference was not statistically significant, t(30) = -0.13, p = .90, η̂2 = .00, 95% CI [-5.54, 4.87]. After adjustment for the ipAch covariate, the mean fpAch score for the IP team F group was 0.46 points higher than for the IP team K group, and this difference was not statistically significant, t(30) = 0.18, p = .86, η̂2 = .00, 95% CI [-4.76, 5.68]. After adjustment for the ipAch covariate, the mean fpAch score for the IP team G group was 0.94 points higher than for the IP team K group, and this difference was not statistically significant, t(30) = 0.38, p = .70, η̂2 = .01, 95% CI [-4.08, 5.96]. Finally, after adjustment for the ipAch covariate, the mean fpAch score for the IP team H group was 2.40 points higher than for the IP team K group, and this difference was not statistically significant, t(30) = 1.04, p = .31, η̂2 = .03, 95% CI [-2.33, 7.12].

Research question 8. The eighth research question was, “How does the direct instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their IP teams, controlling for their initial perceived achievement scores?” In order to answer this question, a hypothesis was formulated; and

ANOVA and ANCOVA were conducted.

Hypothesis 8. The null hypothesis (Ho) states that, for the HSP students from various IP teams taught using direct instruction, there is no statistically significant difference between the unadjusted and the covariate-adjusted final perceived achievement means, controlling for their initial perceived achievement scores (H0: μunadj fpAch-direct − μadj fpAch-direct = 0). The alternative hypothesis (Ha) states that, for the HSP students from various IP teams taught using direct instruction, the unadjusted final perceived achievement mean will be statistically significantly greater than the covariate-adjusted final perceived achievement mean, controlling for their initial perceived achievement scores (Ha: μunadj fpAch-direct > μadj fpAch-direct). The null hypothesis was tested using

ANCOVA through Type I Method of Sum of Squares in GLM (a hierarchical approach).

The means and standard deviations of the initial perceived achievement scores by IP teams, the observed and adjusted mean of the final perceived achievement scores by IP teams are presented in Table 37.


Table 37

Initial, Final, and Adjusted Perceived Achievement Means, and Standard Deviations of

Students’ IP Team, by Direct Instruction (n = 50)

ipAch fpAch

IP Team M SD Unadjusted M Adjusted M SD

A 19.83 1.72 24.67 24.71 2.58
B 18.33 1.75 23.33 23.78 3.39
C 19.29 2.69 25.14 25.33 3.34
D 22.67 2.16 28.00 27.28 3.03
E 20.50 4.28 23.50 23.36 3.08
F 22.40 3.78 25.40 24.75 5.13
G 18.25 2.06 24.25 24.72 3.95
H 17.40 1.67 23.20 23.89 2.49
K 20.80 3.11 27.20 26.98 1.30
Grand 19.98 3.03 24.98 24.98 3.39

Table 37 presents initial perceived achievement scores, and adjusted and unadjusted IP team means for the final perceived achievement scores. From Table 37, the rank ordering of the IP team means was not changed by adjustment for IP teams B, D, and K in the direct instructional group. After adjustment, both the unadjusted and adjusted grand means remained the same for the direct instruction group (24.98). After adjustment for initial perceived achievement scores, within the direct instructional group, the IP team with the highest adjusted final perceived achievement mean was team D (M = 27.28) and the lowest was team E (M = 23.36). In fact, the unadjusted final perceived achievement means were highest for IP team D (M = 28.00) and IP team K (M = 27.20), and lowest for IP team H (M = 23.20), IP team B (M = 23.33), and IP team E (M = 23.50).

= 27.20); and perceived lowest by IP team H (M = 23.20), IP team B (M = 23.33), and IP team E (M = 23.50).

The null hypothesis was tested using univariate ANCOVA through the Type I Method of Sum of Squares in GLM (a hierarchical approach). The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score (fpAch) was the dependent variable. As a preliminary step, a one-way ANOVA without the covariate was conducted; its results are presented in Table 38.

Table 38

Effect of Direct Instruction on Students’ Initial and Final Perceived IP Team

Achievement Scores (ANOVA)

Sources SS df MS F p 휂2

Between Groups 128.41 8 16.05 1.51 .18 .23

Within Groups 434.57 41 10.60

Total 562.98 49

Table 38 presents a one-way ANOVA summary result. From Table 38, for the direct group, there was no statistically significant difference in the final perceived achievement scores for the nine IP teams, F(8, 41) = 1.51, p = .18, ƞ̂ 2 = .23.

To check the homogeneity of regression assumption, a univariate ANCOVA through the Type III Method of Sum of Squares in GLM was conducted with a custom model. The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score (fpAch) was the dependent variable. The results of the interaction between IP team and the covariate through SPSS GLM ANCOVA Type III SS with a custom model are presented in Table 39.

Table 39

Interaction between Covariate and IP Team Using GLM ANCOVA Type III SS, by Direct

Instruction

Sources Type III SS df MS F p η̂2

ipAch 32.89 1 32.89 3.40 .08 .10

IP team 114.06 8 14.26 1.47 .21 .27

IP team*ipAch 102.30 8 12.79 1.32 .27 .25

Error 309.96 32 9.69

Total 562.98 49

Table 39 presents the results of a preliminary ANCOVA Type III Sum of Squares of ipAch, IP team, and the interaction between IP team and ipAch. From Table 39, the initial perceived achievement score by IP team (ipAch * IP team) interaction term was not statistically significant for the direct instruction group, F(8, 32) = 1.32, p = .27, η̂2 = .25. This indicated no significant violation of the homogeneity of regression (or no treatment-by-covariate interaction) assumption.

The null hypothesis was tested using univariate ANCOVA through Type I

Method of Sum of Squares in GLM (a hierarchical approach). The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score

(fpAch) was the dependent variable. The results of the differences between the adjusted and unadjusted means on the final perceived achievement by HSP students IP team after controlling for the covariate through GLM ANCOVA Type I SS with custom model are presented in Table 40.

Table 40

Effect of Direct Instruction on Students’ Final Perceived IP Team Achievement Scores,

Controlling for their initial Perceived IP Team Achievement Scores (ANCOVA)

Sources Type I SS df MS F p η̂2

ipAch 75.87 1 75.87 7.36 .01 .16

IP team 74.85 8 9.36 0.91 .52 .15

Error 412.25 40 10.31

Total 562.98 49

Table 40 shows the result of the effect of direct instruction with IP teams on HSP students' final perceived achievement scores after controlling for initial perceived achievement scores. From Table 40, the main effect of IP team within the direct instruction group in the ANCOVA was not statistically significant, F(8, 40) = 0.91, p = .52, η̂2 = .15, when controlling for ipAch. This result suggested that IP team within the direct instruction group uniquely explained 15% of the variance in final perceived achievement scores.

The null hypothesis was tested using univariate ANCOVA through Type I

Method of Sum of Squares in GLM (a hierarchical approach). The initial perceived achievement score (ipAch) was the covariate, and the final perceived achievement score

(fpAch) was the dependent variable. The results of parameter estimates of final achievement scores by IP team are presented in Table 41.


Table 41

Parameter Estimates of Final Perceived IP Team Achievement Variable for Direct

Instruction

95% CI

Parameter B SE t p LB UB η̂2

intercept 21.60 4.07 5.32 .001 13.39 29.82 .41
ipAch 0.27 0.18 1.47 .15 -0.10 0.64 .05
dtmA -2.27 1.95 -1.17 .25 -6.22 1.67 .03
dtmB -3.20 2.00 -1.61 .12 -7.24 0.83 .06
dtmC -1.65 1.90 -0.87 .39 -5.49 2.19 .02
dtmD 0.30 1.97 0.15 .88 -3.69 4.29 .00
dtmE -3.62 1.95 -1.86 .07 -7.55 0.31 .08
dtmF -2.23 2.05 -1.09 .28 -6.38 1.92 .03
dtmG -2.26 2.20 -1.03 .31 -6.72 2.19 .03
dtmH -3.09 2.12 -1.45 .15 -7.38 1.21 .05
dtmK 0

Table 41 presents the parameter estimates that correspond to the slope coefficients for the dummy variables for IP teams. The codes IP team A (dtmA) = 1, IP team B (dtmB) = 2, IP team C (dtmC) = 3, IP team D (dtmD) = 4, IP team E (dtmE) = 5, IP team F (dtmF) = 6, IP team G (dtmG) = 7, IP team H (dtmH) = 8, and IP team K (dtmK) = 0 provide information about the contrasts between the adjusted final perceived achievement means of dtmA versus dtmK, dtmB versus dtmK, dtmC versus dtmK, dtmD versus dtmK, dtmE versus dtmK, dtmF versus dtmK, dtmG versus dtmK, and dtmH versus dtmK, respectively. Preliminary data screening was done; scores on ipAch and fpAch were reasonably normally distributed with no extreme outliers. The scatter plot for ipAch and fpAch showed a linear relation with no bivariate outliers. Within the direct group, scores on ipAch did not differ significantly across IP teams, F(8, 41) = 2.37, p = .03, η2 = .32; however, students in IP team H rated initial perceived achievement (ipAch) of the IOM standards slightly lower, and students in IP team D rated it higher.

Table 41 shows that, within the direct group, after adjustment for the ipAch covariate, the mean final perceived achievement score for IP team A was 2.27 points lower than for IP team K, and this difference was not statistically significant, t(40) = -1.17, p = .25, η̂2 = .03. After adjustment for the ipAch covariate, the mean final perceived achievement score for IP team B was 3.20 points lower than for IP team K, and this difference was not statistically significant, t(40) = -1.61, p = .12, η̂2 = .06. After adjustment for the ipAch covariate, the mean final perceived achievement score for IP team C was 1.65 points lower than for IP team K, and this difference was not statistically significant, t(40) = -0.87, p = .39, η̂2 = .02. After adjustment for the ipAch covariate, the mean final perceived achievement score for IP team D was 0.30 points higher than for IP team K, and this difference was not statistically significant, t(40) = 0.15, p = .88, η̂2 = .00. After adjustment for the ipAch covariate, the mean final perceived achievement score for IP team E was 3.62 points lower than for IP team K, and this difference was not statistically significant, t(40) = -1.86, p = .07, η̂2 = .08. After adjustment for the ipAch covariate, the mean final perceived achievement score for IP team F was 2.23 points lower than for IP team K, and this difference was not statistically significant, t(40) = -1.09, p = .28, η̂2 = .03. After adjustment for the ipAch covariate, the mean final perceived achievement score for IP team G was 2.26 points lower than for IP team K, and this difference was not statistically significant, t(40) = -1.03, p = .31, η̂2 = .03. Finally, after adjustment for the ipAch covariate, the mean final perceived achievement score for IP team H was 3.09 points lower than for IP team K, and this difference was not statistically significant, t(40) = -1.45, p = .15, η̂2 = .05.

Research question 9. The ninth research question was, “What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?” In order to answer this question, a hypothesis was formulated; and partial correlation and hierarchical multiple regression were conducted.

Hypothesis 9. The null hypothesis (Ho) states that instructional type has no positive and statistically significant impact on the HSP students' final perceived achievement scores in their IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. The alternative hypothesis (Ha) states that participatory instruction will have a positive and statistically significant impact on the final perceived achievement scores of HSP students from various IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The correlations of fpAch and IP teams, and fpAch and instructional type are presented in

Table 42.


Table 42

Summary of Correlation Coefficients, Means, and Standard Deviations for Scores on ipAch and fpAch as a Function of Instruction, by Inter-professional Team

Measure 1 2 3 4 5 6 7 8 9 10 11 12 13

1. fpAch 1

2. dA -.04 1

3. dB -.17 -.15 1

4. dC .02 -.15 -.15 1

5. dD .15 -.14 -.14 -.13 1

6. dE -.14 -.13 -.13 -.12 -.11 1

7. dF .11 -.13 -.13 -.13 -.11 -.11 1

8. dG .09 -.12 -.12 -.12 -.11 -.10 -.10 1

9.dH .08 -.14 -.14 -.13 -.12 -.11 -.11 -.11 1

10.K .08 -.12 -.12 -.12 -.11 -.10 -.10 -.10 -.11 1

11. trtD -.28** -.05 -.10 .03 .12 .14 .05 -.07 -.09 .11 1

12. trtP .39** .08 .11 -.03 -.03 -.14 .06 .07 .04 -.12 -.93** 1

13. ipAch .27* .08 .04 .00 .22 .02 .30 -.01 -.13 -.01 .24 .13 1

Note. (N = 90); * p < .05 (2-tailed); **p< .001; Corrected alpha= .008

The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The partial correlations of fpAch and IP team, and fpAch and instructional type are presented in Table 43.


Table 43

Summary of Partial Correlation Coefficients, Means, and Standard Deviations for Scores on ipAch and fpAch as a Function of Instruction, by Inter-Profession Team

Measure 1 2 3 4 5 6 7 8 9 10 11 12 13

1. fpAch 1

2. dA -.06 1

3. dB -.19 -.16 1

4. dC .02 -.15 -.15 1

5. dD .09 -.16 -.15 -.14 1

6. dE -.16 -.13 -.13 -.12 -.12 1

7. dF .03 -.16 -.15 -.13 -.20 -.12 1

8. dG .09 -.12 -.12 -.12 -.11 -.10 -.10 1

9.dH .12 -.13 -.13 -.13 -.10 -.11 -.08 -.11 1

10.K .09 -.12 -.12 -.12 -.11 -.10 -.10 -.09 -.11 1

11. trtD -.37** -.07 -.11 .03 .06 .14 -.02 -.07 -.06 .12 1

12. trtP .37** .07 .11 -.03 -.06 -.14 .02 .07 .06 -.12 -1.00** 1

M 26.08 2.68 2.62 2.49 2.39 1.87 2.23 1.69 1.98 1.66 11.10 8.5

SD 3.58 6.92 6.79 6.57 6.88 5.83 6.86 5.52 5.69 5.46 10.23 10.0

Note. (N = 90); * p < .05 (2-tailed); **p < .001; Corrected alpha = .008

Table 43 presents the results of partial correlation coefficients between fpAch and IP team levels, controlling for ipAch scores. From Table 43, the results indicated that the partial correlation between fpAch and participatory instruction was .37 (p = .001), which was slightly less than the correlation when the effect of ipAch was not controlled for (r = .39, p = .001) (see Table 42).

In fact, from Tables 42 and 43, the correlation coefficient remained almost the same after controlling for ipAch. Although this correlation was still positive and statistically significant (its p-value remained below .05), the relationship between fpAch and participatory instruction with IP team weakened slightly. In terms of variance, the value of R2 for the partial correlation was .14, which means that participatory instruction with IP team uniquely shared 14% of the variance in fpAch (compared to 15% when ipAch was not controlled). However, the partial correlation between fpAch and direct instruction with IP team was -.37, which was larger in magnitude than the correlation when the effect of ipAch was not controlled for (r = -.28); the coefficient was about 1.3 times what it was before. This correlation was still negative and statistically significant (its p-value remained below .05), and the negative relationship between direct instruction with IP team and fpAch became stronger. In terms of variance, the value of R2 for the partial correlation was .14, which means that direct instruction with IP team uniquely shared 14% of the variance in fpAch (compared to 8% when ipAch was not controlled). This result suggested that there was a negative and statistically significant relationship between direct instruction with IP team and fpAch when ipAch was controlled for.

Examining the correlation coefficients, both IP teams C and G had positive coefficients that remained the same (r = rp = .02 and r = rp = .09, respectively), very close to 0 and not statistically significant. Before controlling for ipAch, IP teams C, D, F, G, H, and K had positive correlation coefficients, with D being the highest (r = .15) and C the lowest (r = .02), whereas IP teams A, B, and E had negative correlation coefficients, with B being the largest in magnitude (r = -.17) and A the smallest (r = -.04). After controlling for ipAch, IP teams C, D, F, G, H, and K had positive partial correlation coefficients, with H being the highest (rp = .12) and C the lowest (rp = .02), whereas IP teams A, B, and E had negative partial correlation coefficients, with B the largest in magnitude (rp = -.19) and A the smallest (rp = -.06). In terms of variance, the values of R2 for the positive partial correlations ranged from .00 to .01, suggesting that levels of IP team shared at most about 1% of the variance in fpAch (compared with values of R2 for the positive correlations ranging from .00 to .02, or about 2% of the variance, when ipAch was not controlled). This suggests that levels of IP team had no statistically significant influence on the final perceived achievement score.

The null hypothesis was tested using partial correlation and hierarchical multiple regression approaches. The summary of hierarchical multiple regression analysis for the relationship between fpAch and IP team, and fpAch and instructional types with ipAch are presented in Table 44.


Table 44

Summary of Hierarchical Regression Analysis for Variables Predicting Final Perceived

Achievement, by IP Team

Variable B t sr2 R R2 ΔR2

Step 1 (R = .27, R2 = .07, ΔR2 = .07)
ipAch .25 2.60* .07

Step 2 (R = .39, R2 = .15, ΔR2 = .08)
ipAch .23 2.04* .04
dB-ipAch -.06 -0.80 .01
dC-ipAch .03 0.45 .00
dD-ipAch .07 0.91 .01
dE-ipAch -.06 -0.74 .01
dF-ipAch .03 0.53 .00
dG-ipAch .08 0.96 .01
dH-ipAch .09 1.12 .01
dK-ipAch .08 0.95 .01

Step 3 (R = .54, R2 = .29, ΔR2 = .14)
ipAch .17 1.61 .02
dB-ipAch -.06 -0.98 .01
dC-ipAch .05 0.74 .01
dD-ipAch .09 1.33 .02
dE-ipAch -.02 -0.23 .00
dF-ipAch .05 0.70 .00
dG-ipAch .07 0.99 .01
dH-ipAch .09 1.20 .01
dK-ipAch .12 1.52 .02
trtP-ipAch .14 3.90** .14

Note. N = 90; * p < .05, ** p < .001.

Table 44 shows the results of the hierarchical regression analysis for the IP team variables predicting final perceived achievement. From Table 44, the hierarchical multiple regression revealed that at Step 1, initial perceived achievement contributed significantly to the regression model, F(1, 88) = 6.73, p = .01, and accounted for 7% of the variation in final perceived achievement. In Step 2, introducing the IP team variables explained an additional 8% of the variation in final perceived achievement, and this change in R2 was not significant, F(8, 80) = 0.99, p = .45. In Step 3, adding instructional type (trtP) to the regression model explained an additional 14% of the variation in final perceived achievement, and this change in R2 was positive and statistically significant, F(1, 79) = 15.17, p = .001. When all three blocks of independent variables were included in Step 3 of the regression model, none of the IP team variables was a significant predictor of final perceived achievement. The most important predictor of final perceived achievement was participatory instruction, which uniquely explained 14% of the variation in final perceived achievement. Together the three blocks of independent variables accounted for 29% of the variance in final perceived achievement.

Other Findings

Table 45

Summary of Hierarchical Regression Analysis for Variables Predicting Final Perceived

Achievement, all of the Demographics Variables

Variable B t sr2 R R2 ΔR2

Step 1 (R = .27, R2 = .07, ΔR2 = .07)
ipAch .40 2.60* .07

Step 2 (R = .51, R2 = .25, ΔR2 = .18)
ipAch .40 2.94** .09
dBSN-ipAch -.13 -1.33 .02
dNUT-ipAch -.11 -1.51 .02
dSLP-ipAch .05 0.78 .01
dSW-ipAch .00 -0.01 .00
dOther-ipAch .09 1.33 .02
dFem -.02 -0.42 .00
dGrad -.11 -1.35 .02
dWkal .05 1.16 .01
dB-ipAch -.06 -0.77 .01
dC-ipAch -.11 -1.44 .02
dD-ipAch .01 0.11 .00
dE-ipAch .04 0.47 .02
dF-ipAch -.09 -1.10 .01
dG-ipAch .01 0.13 .00
dH-ipAch .06 0.65 .00
dK-ipAch .08 0.90 .01

Step 3 (R = .60, R2 = .36, ΔR2 = .11)
ipAch .31 2.42* .05
dBSN-ipAch -.08 -0.89 .01
dNUT-ipAch -.09 -1.26 .01
dSLP-ipAch .05 0.86 .01
dSW-ipAch -.02 -0.28 .00
dOther-ipAch .09 1.40 .02
dFem .00 -0.02 .00
dGrad -.08 -1.12 .01
dWkal .03 0.64 .00
dB-ipAch -.06 -0.86 .01
dC-ipAch -.12 -1.59 .02
dD-ipAch .02 0.26 .00
dE-ipAch .05 0.73 .01
dF-ipAch -.06 -0.85 .01
dG-ipAch .01 0.14 .00
dH-ipAch .05 0.62 .00
dK-ipAch .09 1.14 .01
trtP-ipAch .13 3.41** .11

Note. N = 90; * p < .05, ** p < .001.

Table 45 shows the results of the hierarchical regression analysis for the demographic variables predicting final perceived achievement. From Table 45, the hierarchical multiple regression revealed that at Step 1, initial perceived achievement contributed significantly to the regression model, F(1, 88) = 6.73, p = .01, and accounted for 7% of the variation in final perceived achievement. In Step 2, introducing all of the demographic variables explained an additional 18% of the variation in final perceived achievement, and this change in R2 was not significant, F(16, 72) = 1.11, p = .36. In Step 3, adding instructional type to the regression model explained an additional 11% of the variation in final perceived achievement, and this change in R2 was significant, F(1, 71) = 11.62, p = .001. When all three blocks of independent variables were included in Step 3 of the regression model, none of the demographic variables was a significant predictor of final perceived achievement. The most important predictor of final perceived achievement was participatory instruction, which uniquely explained 11% of the variation in final perceived achievement. Together the three blocks of independent variables accounted for 36% of the variance in final perceived achievement.

Qualitative Data Analysis

The analysis of each case, and across the six cases, addressed the various research questions related to HSP students' perceived achievement of the standards in the HSP 5510 course.

The descriptions of the selected students (using pseudonyms), divided by instructional type, follow. Information is from a variety of sources, including quantitative pre-survey, post-survey, and journal reflection data. Because quantitative data alone would be inadequate to explain the results, the researcher analyzed the students' reflection journals and the comments the students made on the standards during the pre- and post-surveys. In addition, each selected case's quantitative data were retrieved from the main data spreadsheet for critical examination on the standards. According to Creswell and Plano Clark (2011, p. 9), "A need exists to explain initial results," and "Sometimes the results of a study may provide an incomplete understanding of a research problem and there is a need for further explanation."

Research question 10 addressed this need for explanation; that is, qualitative data were used to explain the quantitative results.

Research question 10. The tenth research question was, “How do the HSP students’ journal reflections help explain their self-concept on standards of a group module project?” In order to answer this question, qualitative analysis was used, and the first cycle coding method was themeing the data and the second cycle coding method was the elaborative coding.

Qualitative Data Analysis

The data for the six cases were retrieved from multiple sources to provide the richness and the depth of each case description and included: a) response ratings on the standards, b) comments from survey items, c) students' journal reflections, d) themeing the data, and e) elaboration of the data. The sources for themeing the data propose structures for analyzing or reflecting on themes. In this study, the themes are the standards. The five major themes were "patient-centered care," "interdisciplinary teams," "evidence-based practice," "quality improvement," and "informatics."

There are several steps involved in the qualitative analysis. These steps included: a) "preliminary exploration of the data reading through the journal"; b) "coding the data by segmenting and labeling the text"; c) "verifying the codes through inter-coder agreement check"; d) "using codes to develop themes by aggregating similar codes together"; e) "connecting a case study narrative composed of descriptions and themes"; and f) "cross-case thematic analysis". "Credibility of the findings was secured by triangulating different sources of information, inter-coder agreement, rich and thick descriptions of the cases" (Creswell & Plano Clark, 2011, pp. 308-309). The goal is to obtain more statements of students' behaviors that help explain the findings of the quantitative analysis. According to Auerbach and Silverstein (2003, p. 104; as cited in Saldaña, 2009), "elaborative coding 'is the process of analyzing textual data in order to develop theory further'" (p. 168). In the elaboration process, the data were grouped into categories, and the themes emerged from the coded data.
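One common way to quantify the inter-coder agreement check mentioned in step c is Cohen's kappa between two coders' theme assignments. The toy example below is purely illustrative and uses invented labels rather than the study's actual codes.

    from sklearn.metrics import cohen_kappa_score

    # Two coders' theme assignments for the same five journal excerpts (invented).
    coder_1 = ["patient-centered", "teamwork", "informatics", "teamwork", "quality"]
    coder_2 = ["patient-centered", "teamwork", "informatics", "quality",  "quality"]

    # Kappa corrects raw agreement for chance; 1.0 would be perfect agreement.
    print(round(cohen_kappa_score(coder_1, coder_2), 2))   # about 0.74 here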

The explanations of students’ self-concept on standards of the group module project were drawn from the students’ journal reflections. Authenticity was provided by using the words of the six students, with pseudonyms used to protect their identity. The analysis of each case and across the six cases yielded some useful quotes related to their self-concepts in the IOM standards (see Appendix K). Table 46 presents the demographic data of the selected HSP students for the two instructional groups.

Table 46

Demographic Data for the Selected HSP Students, by Two Instructional Types

Demographic Participatory Group Direct Group (Post)

Age PA: 21 DA: 24

PB: 21 DB: 23

PC: 23 DC: 22

Gender PA: Male DA: Female

PB: Female DB: Male

PC: Female DC: Female


Status PA: Undergraduate DA: Graduate

PB: Undergraduate DB: Undergraduate

PC: Graduate DC: Undergraduate

Major PA: Music Therapy DA: Audiology

PB: Music Therapy DB: Nursing

PC: Speech-Language Therapy DC: Nursing

IP Team PA: Ohioans (Team 10) DA: Health Avenger (Team 18)

PB: Clinical CR3W (Team 11) DB: Health Avenger (Team 18)

PC: Ohioans (Team 10) DC: Interdisciplinary Dreams (Team

20)

Team Pref PA: Working alone DA: Working in teams

PB: Working in teams DB: Working alone

PC: Working alone DC: Working in teams

Table 46 shows the demographic information of the HSP students selected for qualitative phase 2. From Table 46, PA was a 21-year-old male undergraduate Music Therapy (MT) student. He was in the fall 2013/2014 cohort participatory group and belonged to team B (#10), namely, the Ohioans. The team was composed of six members. He preferred working alone.

PB was a 21-year-old female undergraduate Music Therapy (MT) student. She was in the fall 2013/2014 cohort participatory group and belonged to team C (#11), called the Clinical CR3W. The team was composed of five members. She preferred working in teams.

PC was a 23-year-old female graduate Speech-Language Pathology (SLP) student. She was in the fall 2013/2014 cohort participatory group and belonged to team B (#10), namely, the Ohioans. The team was composed of six members. She preferred working alone.

DA was a 24-year-old female graduate Audiology student. She was in the fall 2014/2015 cohort direct group and belonged to team A (#18), namely, the Health Avengers. The team was composed of six members. She preferred working in teams.

DB was a 23-year-old male undergraduate Nursing student. He was in the fall 2014/2015 cohort direct group and belonged to team A (#18), called the Health Avengers. The team was composed of six members. He preferred working alone.

DC was a 22-year-old female undergraduate Nursing student. She was in the fall 2014/2015 cohort direct group and belonged to team C (#20), namely, Interdisciplinary Dreams. The team was composed of seven members. She preferred working in teams.

The codes and keywords with respect to the IOM standards are shown in Table 47.


Table 47

Codes, Coding, and Keywords, by Standards

Rp. Themeing the data (keywords): Diabetes, antibiotic conflict, aurasma, patient provider, cranial nerves, mobile apps to inform patients, AAC. Elaborative coding (keywords): Liked (60), really (95), thinking (76), interesting (37), feeling (44), need (11), because (61), happy (4), excited (8), hope (6), challenge (6), important (8), helpful (10), help (27), useful (9), believe (5), benefit (6), surprise (7), impressed (2), difficult (9), wish (1), explain (12), aware (3), wonder (4), feedback (4).

Rtw. Themeing the data (keywords): Dog bite, Burn case, Expert opinion, online collaborative tools, online planning tools, professional branding, LinkedIn, Twitter, Major swap.

Re. Themeing the data (keywords): EBP, Problem Solving, When apps do a health professionals job, medical controversy, PubMed, google-hangout.

Rq. Themeing the data (keywords): Infographic, HIPAA app, Expert opinion 2, creating a safety newsletter, patient safety.

Rinf. Themeing the data (keywords): Explain Everything apps, Healthcare Informatics, Photos, Professional interview.

Students’ Comments on Survey by Standards

Patient-centered care knowledge. On patient-centered care survey item, PB rated it from a 2 to a 6. She noted that “Patient-centered care is something I knew about before this class but now I know effective ways to implement patient centered care as an interdisciplinary team.” PC rated patient-centered care from a 5 to a 5. She stressed that patient-centered care was “Focusing on the patients, they are more than an ICD-9 code.”

However, DA rated it from a 5 to a 6. She expressed her understanding of patient-centered care as “Putting patients’ beliefs and values first; doing what is best for the patient at all times; basing the treatment on the best interests of the patient; and communicating with other healthcare professionals.” DB rated it from a 7 to a 5. He noted that patient-centered care is a “Care that is in the best interest of the patient.” Finally, DC rated it from a 5 to a 5. She outlined her experience: “As I progress through nursing school, my knowledge of patient-centered care has grown. It is not something that can be taught but instead it is learned as I grow as a nurse. As I gain more experience in the nursing field my knowledge of patient centered care will continue to grow.” These results suggest that the HSP students were knowledgeable in the patient-centered care standard. The participatory group’s ratings changed by as much as 4 points, whereas the direct group’s ratings changed by at most 1 point, including a loss of 2 points, suggesting that participatory instruction had increased HSP students’ self-concept in patient-centered care more than direct instruction.

Interdisciplinary teamwork. On interdisciplinary teamwork, PB rated it from a

4 to a 7. She claimed that “After completing this course, I feel like a total expert on this topic.” PC rated interdisciplinary teamwork from a 5 to a 5. She believed to “Work with other professions to provide the best care possible.” However, DA rated interdisciplinary teamwork from a 5 to a 6. She expressed her understanding as “Working in teams with other healthcare professionals to determine the best treatment for the patient; pulling together all professionals knowledge.” DB rated interdisciplinary teamwork from a 5 to a

4. He reported “Obviously it is the teamwork of different professions to provide care.

However, I do not yet know how to apply it completely.” Finally, DC rated interdisciplinary teamwork from a 4 to a 5. She noted that:

Interdisciplinary teamwork is crucial in patient care. No single resource or person

will be able to treat an individual and get them back to a healthy state. I have not

had many experiences with interdisciplinary teamwork yet but I do know that it is

very important in patient centered care. I look forward to gaining more skills and

insight into this area throughout this course, during my clinicals, and when I

become a registered nurse.

Thus, the participatory group's ratings changed by 0 to 3 points, whereas the direct group's ratings changed by at most 1 point, including a loss of 1 point, suggesting that participatory instruction had increased HSP students' self-concept in interdisciplinary teamwork more than direct instruction.

Evidence-based practice. On evidence-based practice, PA rated it from a 7 to a

7. He noted that evidence-based practice referred to “Using what has been found best practices in research for clinic purposes.” However, DC rated it from a 4 to a 5. She noted that “Evidence-based practice is necessary to provide the most recent and up to date care for patients. Without new research many healthcare providers would be stuck in old ways that are not the most efficient and/or effective way of treating patients.”

Thus, the participatory rating showed a ceiling effect at the maximum score of 7, whereas direct instruction had a 1-point change, suggesting that participatory instruction had increased HSP students' self-concept in evidence-based practice more than direct instruction.

Quality improvement. On quality improvement, PB rated it from a 1 to a 5. She claimed “I know more about quality improvement, but am still unsure of real implementations for it.” PC rated it from a 1 to a 5. She stressed that “Just because something is already set in place, doesn't mean it is the best option. Always try to improve.” However, DA rated it from a 4 to a 5. She reported that quality improvement is a “Communication between professionals and improve quality of patients care for an improved patient experience.” Similarly, DB rated it from a 1 to a 1. He noted that quality improvement is “Using evidence to make improvements to care.” Finally, DC rated it from a 5 to a 4. She noted that quality improvements were the “Improvements in healthcare that will ultimately lead to better patient outcomes. This can happen through the use of EBP, quality patient centered care, and advocacy for patients.” Thus, the participatory group's ratings increased by 4 points, whereas the direct group's ratings showed a change of 1 point, a loss of 1 point, and a floor effect at the minimum rating of 1, suggesting that participatory instruction had increased HSP students' self-concept in quality improvement more than direct instruction.

Informatics. On the informatics, PB rated it from a 1 to a 6. She reported that

“Before this class I had never really heard of informatics. Now I feel like I am fairly confident on what they are and how to read them.” Similarly, PC rated it from a 1 to a 2.

She noted that “I'm still not quite sure the role/purpose of this.” However, DA rated it from a 3 to a 5. She reported that informatics was “Using technology to communicate, mitigate error, and for knowledge basis.” DB rated it from a 1 to a 4. He also noted that informatics was “Technology used to provide a universal format for providing information.” Finally, DC rated it from a 1 to a 2. She expressed her feeling that “I am starting to understand what informatics is but I do not have a very good grasp on the concept yet to be able to comment.” Thus, participatory instruction had changes ranged from 1 to 5 points, whereas direct instruction had changes ranged from 1 to 3 points, suggesting that participatory instruction had increased HSP students’ self-concept in informatics more than the direct instruction. 300

Challenges/Benefits for HSP Students Learning/Working Together

Problems of working in teams. Individually, HSP students who were in the participatory group expressed their feelings about the problems they perceived when working in teams. For example, PA reported, “Everyone pulling his or her own part of work. Sometimes some group members do not pull their weight and everyone else has to pick up the slack.” PB also reported, “I think that it may be difficult to find times to meet together because I know that we are all busy and at least one of my group members does not live in Athens on the weekends.” However, PC noted, “The question above is hard to answer--I enjoy working in a team and working alone. It depends on the project, and it also depends on the group members. I have had too much varied experience to say what I prefer. In a group, it is hard to have everyone equally participate. It becomes hard when people drag their feet or don't respond to emails.” In contrast, HSP students who were in the direct group expressed their feelings about the problems they perceived when working in teams. For example, DA noted, “Some problems could be having differing opinions based on one’s field; figuring out a time for meeting with the inter-professional groups; trying to find the best time to meet as there are differing schedules.” DB reported “Different schedules” as the problem with working in teams. Finally, DC noted, “So far we have not worked in teams yet in this course. Some problems that could potentially arise are: others not pulling their weight with a topic, conflicts with times if we need to meet outside of class, and differing opinions.”

Benefits of working in teams. On the benefits of working in teams, HSP students who were in the participatory group had varied perceptions. For example, PA reported that “Different view point and approaches to the same problem” were the benefits when working in teams. PB also reported, “We worked really well together and were able to help each other out in areas that some people were better than others.” PC noted that “Knowing each professions scope of practice better, feel more comfortable when making referrals, more comprehensive treatment and assessment outcomes” were some of the benefits. HSP students who were in the direct group also identified benefits of working in teams. For example, DA noted that “Learning more about other healthcare fields and bringing ones skills from differing fields to the table” were some of the benefits. DB also noted “Multiple perspectives and brainstorming of good ideas” as some of the benefits. Finally, DC reported, “In the past some of the benefits I have encountered with team work include, different ideas and opinions, and being able to split a large amount of work up to accomplish the final product sooner.”

Problems of working alone. HSP students who were in the participatory group expressed their feelings about the problems they perceived when working alone. For example, PA reported that “Becoming overwhelmed with the amount of work and sometimes having to have someone else help or ask a friend for clarification” were some of the problems of working alone. PB expressed her feeling that “I am not the most technological person, so figuring out some of the apps may be difficult for me on my own.” In the same manner, PC noted that “Being left to fend for myself in fields I'm unfamiliar with when working on case studies. Being unsure of what was to be turned in, and missing assignments” were some of the problems when working alone. HSP students who were in the direct group also expressed their feelings about the problems they perceived when working alone. For example, DA noted that “Figuring out all the apps available and working with media. Not having more ideas such as other professional’s background” were some of the problems faced when working alone. DB also reported that “Not having the input of other disciplines to add to my work” was one of the problems when working alone. Finally, DC noted, “Only being able to reflect on your own opinion and difficulty finishing on time if a lot of work or research needs to be done in a short amount of time. Working alone could create problems such as lack of opinions or ideas, and time constraints if others cannot help you with research or putting something together.”

Benefits of working alone. On the benefits of working alone, HSP students who were in the participatory group had varied perceptions. For example, PA noted, “Not have to worry about scheduling conflicts. You know how everything will turn out and that you earned the grade by yourself.” PB reported, “The only benefit that I can see from working by myself would not have to plan around other people's schedules. I didn't have to leave my apartment.” Similarly, PC noted that “Perhaps bringing your individual ‘plan of attack’ to your group. Getting things done more quickly” were some of the benefits of working alone. HSP students who were in the direct group also identified benefits of working alone. For example, DA reported that “Completing projects on my own schedule” was one of the benefits of working alone. DB noted that “Flexibility of my own schedule and being able to apply the ideas I think are best” were some of the benefits of working alone. Finally, DC noted, “Being able to hold yourself accountable for your own grade and the time you spend working on something. You don't have to rely on others, only yourself, being able to work at your own pace, and not having any conflicting ideas.”

Post Perceptions of Other Disciplines

Analyses of the students’ perceptions of other disciplines, expressed in words, phrases, or descriptions, were examined. The results indicated that HSP students held different views of other disciplines. For example, regarding social work, for HSP students who were in the participatory group, PA perceived social work as “Advocacy, abuse, and under-privileged.” PB also perceived that “Social workers are very important. They help people with the necessities of life if they can't do it for themselves.” PC perceived social work as “Advocate for and allow disadvantaged populations become the best they can be, and understand that they are doing the best they can with what they have.” However, for HSP students who were in the direct group, DA perceived social work as “Helping others who have behavioral or personality disorders. Also helping advocate for the patient as to hospital release.” DB perceived social work as “Profession that dives into the social aspect of different cases. Good for providing resources.” Finally, DC perceived social work as “Person centered, therapy, caseworker, diagnosing, treating, mental health, behavioral health, advocating, child protective services, administrative, support, solving personal and/or family problems.”

Nursing. About nursing, for HSP students who were in the participatory group, PA perceived nursing as “Bedside, procedures, first-hand; needles, hospital.” PB perceived that “Nurses are kind of the essential part behind medical facilities running efficiently.” PC perceived nursing as the “Heart of many hospitals and clinics. Nurturing, interpret lab results, check vital signs, may help to answer questions a family or patient may have.” However, for HSP students who were in the direct group, DA perceived nursing as “Caring for patients while in the nursing home or hospital; distributing medications, pulling blood, and communicating with the doctor.” DB perceived nursing as “The core of healthcare. Nurses provide essential care for the patients and advocate for them. Nurses know the patients better than the doctors, usually.” Finally, DC perceived nursing as “High quality healthcare, patient focused care, education, medication administration, assessment, collaboration with other healthcare professionals, patient advocate, preventative care, LPN, RN, STNA, CNP, CRNA.”

Dietician. About dieticians, for HSP students who were in the participatory group, PA perceived a dietician as “Diet.” PB perceived a dietician as “Working in a private practice to put people who are overweight on a diet plan or making a specific diet for athletes.” PC perceived a dietician as “Meal plan, gain/lose weight, helps patients on modified diet find foods they may enjoy.” However, for HSP students who were in the direct group, DA perceived a dietician as “Food, diet, food group, balanced meals.” DB perceived a dietician as a “Profession for providing dietary counseling and guidelines.” Finally, DC perceived a dietician as “Registered dietitians, healthy lifestyle, improved health, education, healthy eating, obesity, specialized diets, planning, evaluation, and implementation of food, nutritional facts, my plate, portions.”

Physical therapy. About physical therapy, for HSP students who were in the participatory group, PA perceived physical therapy as “Balance, stretches, broken bones, mobility.” PB perceived that “Physical therapy is very important in many settings especially the rehabilitation setting. They work with a wide variety of populations that I did not realize.” PC perceived that physical therapy “Helps to maintain current function and mobility, and helps to rehabilitate to former state, or helps to use residual strength.” However, for HSP students who were in the direct group, DA perceived physical therapy as “Muscle activation, therapy, fitness, and exercises.” DB perceived physical therapy as a “Profession for rehabilitating patients regarding their physical wellness.” Finally, DC perceived physical therapy as “Exercises, rehabilitation, orthopedics, reduce pain, restore function, range of motion, strength, balance, and neurological disorders, and develop fitness and wellness programs, nursing homes, education, patience, sports injuries.”

Speech-language pathology. For HSP students who were in the participatory group, PA perceived speech-language pathology as “Swallowing disorders, speech, stuttering.” PB perceived that “Speech-Language Pathologists also work with a wide range of populations. They work with phonation and articulation along with swallowing disorders and things of that sort.” PC perceived that speech-language pathologists “Prevent, evaluate, diagnose, and treat disorders involving all aspects of communication, swallowing, and executive functioning skills related to language disorders.” However, for HSP students who were in the direct group, DA perceived speech-language pathology as “Speech, articulation, swallowing, aphasia.” DB perceived speech-language pathology as a “Profession dealing with speech and language …Helpful for everything from therapy for speech impediments to guidance for swallowing.” Finally, DC perceived speech-language pathology as “Swallowing studies, speech and language disorders, feeding disorders, voice disorders, clef lip/palate, stroke/TBI, language skills, articulation, aphasia, delayed language disorders.”

Medicine. For HSP students who were in the participatory group, PA perceived medicine as “Doctor, diagnosis, leader.” PB perceived that “Doctors are important because they have the ability to diagnose and prescribe medicine. Doctors are the decision makers.” PC perceived medicine as “Primary diagnosis, treat, surgery, prescribe medications.” However, for HSP students who were in the direct group, DA perceived medicine as “Diagnosing, treatment, medication.” DB perceived medicine as “The basis of western medicine.” Finally, DC perceived medicine as “Holistic medicine, allopathic, osteopathic, structure, function, manipulation, MD, DO, residency, diagnose, treat, prescribe, preventative care, specialties, surgery.”

Music therapy. For HSP students who were in the participatory group, PA perceived music therapy as “Music, guitar, goals.” PB reported, “This is my field, so I know a lot about it. We work with people from all different populations in many different settings to improve on non-musical goals with the use of music as a therapeutic tool.” PC perceived music therapy as “Alternative forms of therapy to reach persons in a different way, I think of Autism when I hear this profession.” However, the survey for HSP students in the direct group contained no item on music therapy, so no comments were available from that group.

Audiology. For HSP students who were in the direct group, DA perceived that audiology is about “Hearing aids, cochlear implants, vestibular testing, hearing evaluations, and auditory processing disorders.” About audiology, DB perceived he was “Still not sure but something with the study of hearing.” DC perceived that audiology was “Hearing test, cochlear implants, hearing aids, impaired hearing, sound, balance, communication, ear canal, hearing rehabilitation.” However, the survey for HSP students in the participatory group contained no item on audiology, so no comments were available from that group.

Other qualitative results. DA reported that “I was unaware that the dietician had more schooling. I was unaware that twitter could be used professionally or in the healthcare field. I have never had a twitter account so this was all a new experience. At this point, I did not know how to tweet or follow anybody, which made me feel a little out of the loop.”

DC also reported that “The fifty accounts that were suggested for us to follow that were healthcare related were incredibly interesting. I was unaware that there were so many medical accounts to follow that related to not only me as a nurse but also me as a student.”

Main Findings

The major findings resulting from the analysis of the statistical data presented in this study were the following:

1. Statistical analysis does not support null hypothesis 1, which states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students taught using participatory instruction and the change in overall perceived achievement scores for the HSP students taught without using participatory instruction. Thus, the null hypothesis was rejected, and the alternate hypothesis was accepted: the change in overall perceived achievement scores for HSP students taught using participatory instruction was statistically significantly greater than the change in overall perceived achievement scores for HSP students taught without using participatory instruction. The participatory instruction group change score was greater than the direct instruction group change score.

2a. Statistical analysis does not support null hypothesis 2a, which states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction. Thus, the null hypothesis was rejected, and the alternate hypothesis was accepted: the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction was statistically significantly greater than the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction.

2b. Statistical analysis supports null hypothesis 2b, which states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction. Therefore, the null hypothesis was not rejected.

2c. Statistical analysis supports null hypothesis 2c, which states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams and the change in overall perceived achievement scores for the HSP students who preferred working alone, both taught using participatory instruction. Therefore, the null hypothesis was not rejected.

2d. Statistical analysis supports null hypothesis 2d, which states that there is no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams and the change in overall perceived achievement scores for the HSP students who preferred working alone, both taught without using participatory instruction. Therefore, the null hypothesis was not rejected.

3. Statistical analysis supports null hypothesis 3, which states that there is no statistically significant difference among the final perceived achievement scores for the HSP students from the various majors taught using participatory instruction, controlling for their initial perceived achievement scores. Therefore, the null hypothesis was not rejected.

4. Statistical analysis supports null hypothesis 4, which states that there is no statistically significant difference among the final perceived achievement scores for the HSP students from the various majors taught using direct instruction, controlling for their initial perceived achievement scores. Therefore, the null hypothesis was not rejected.

5. Statistical analysis does not support null hypothesis 5, which states that the HSP students’ instructional type has no positive and statistically significant impact on their final perceived achievement scores across majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. Thus, the null hypothesis was rejected, and the alternate hypothesis was accepted: instructional type (participatory instruction) had a positive and statistically significant impact on the final perceived achievement scores of HSP students from the various majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

6. Statistical analysis does not support null hypothesis 6, which states that the HSP students’ instructional type has no positive and statistically significant impact on their final perceived achievement scores across team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. Thus, the null hypothesis was rejected, and the alternate hypothesis was accepted: instructional type (participatory instruction) had a positive and statistically significant impact on the final perceived achievement scores of HSP students across team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

7. Statistical analysis supports null hypothesis 7, which states that there is no statistically significant difference among the final perceived achievement scores for the HSP students from the various IP teams taught using participatory instruction, controlling for their initial perceived achievement scores. Therefore, the null hypothesis was not rejected.

8. Statistical analysis supports null hypothesis 8, which states that there is no statistically significant difference among the final perceived achievement scores for the HSP students from the various IP teams taught using direct instruction, controlling for their initial perceived achievement scores. Therefore, the null hypothesis was not rejected.

9. Statistical analysis does not support null hypothesis 9, which states that the HSP students’ instructional type has no positive and statistically significant impact on their final perceived achievement scores across IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. Thus, the null hypothesis was rejected, and the alternate hypothesis was accepted: instructional type (participatory instruction) had a positive and statistically significant impact on the final perceived achievement scores of HSP students across IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

10. How do the HSP students’ journal reflections help explain their self-concept on standards of a group module project?

Patient-centered care. Ratings in the participatory group changed by as much as 4 points, whereas the direct group showed a gain of 1 point and a loss of 2 points, suggesting that participatory instruction increased HSP students’ self-concept in patient-centered care more than direct instruction did.

Interdisciplinary teamwork. Ratings in the participatory group changed by 0 to 3 points, whereas the direct group showed a gain of 1 point and a loss of 1 point, suggesting that participatory instruction increased HSP students’ self-concept in interdisciplinary teamwork more than direct instruction did.

Evidence-based practice. Ratings in the participatory group reached a ceiling effect at the maximum rating of 7, whereas the direct group showed a 1-point change, suggesting that participatory instruction increased HSP students’ self-concept in evidence-based practice more than direct instruction did.

Quality improvement. Ratings in the participatory group changed by as much as 4 points, whereas the direct group showed a gain of 1 point, a loss of 1 point, and a floor effect at the minimum rating of 1, suggesting that participatory instruction increased HSP students’ self-concept in quality improvement more than direct instruction did.

Informatics. Ratings in the participatory group changed by 1 to 5 points, whereas the direct group's ratings changed by 1 to 3 points, suggesting that participatory instruction increased HSP students’ self-concept in informatics more than direct instruction did.

Summary

A two-tailed t-test of independent samples was used to test hypotheses 1 and 2; ANOVA and ANCOVA were used to test hypotheses 3, 4, 7, and 8; and partial correlation and hierarchical multiple regressions were used to test hypotheses 5, 6, and 9. Research question 10 was based on the students’ journal reflections and focused on explaining the results of the quantitative analysis. Themeing the data and elaborative coding were the qualitative data analysis strategies used to answer research question 10. The statistical analysis did not support null hypotheses 1, 2a, 5, 6, and 9; thus, those null hypotheses were rejected, and the alternate hypotheses were accepted. On the contrary, the statistical analysis supported null hypotheses 2b, 2c, 2d, 3, 4, 7, and 8; therefore, those null hypotheses were not rejected. The results of the qualitative data analyses for research question 10 were used to elaborate the significant gains found for hypotheses 1 and 2a.
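For readers who wish to see how a comparison of this kind is computed, the following is a minimal sketch, in Python, of an independent-samples t-test on change scores together with a pooled-standard-deviation Cohen's d. It is offered only as an illustration of the statistics summarized in Table 48: the study's analyses were run in SPSS, and the simulated arrays and names (participatory, direct) below are hypothetical placeholders rather than the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical change scores (post-survey minus pre-survey) for the two groups;
# the sizes mirror the study's n = 40 (participatory) and n = 50 (direct).
participatory = rng.normal(loc=6.0, scale=4.0, size=40)
direct = rng.normal(loc=2.5, scale=4.5, size=50)

# Two-tailed independent-samples t-test, as used for hypotheses 1 and 2
t_stat, p_value = stats.ttest_ind(participatory, direct)

# Cohen's d based on the pooled standard deviation
n1, n2 = len(participatory), len(direct)
pooled_sd = np.sqrt(((n1 - 1) * participatory.var(ddof=1) +
                     (n2 - 1) * direct.var(ddof=1)) / (n1 + n2 - 2))
d = (participatory.mean() - direct.mean()) / pooled_sd
print(f"t({n1 + n2 - 2}) = {t_stat:.2f}, p = {p_value:.3f}, d = {d:.2f}")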


Table 48

Summary Table of Independent Samples t-Test Results

            Q1               Q2a              Q2b              Q2c              Q2d
            P vs D           P vs D (Wkt)     P vs D (Wka)     Wkt vs Wka (P)   Wkt vs Wka (D)
Var         t(88)     d      t(61)     d      t(25)     d      t(38)     d      t(48)     d
Rp           3.55**  0.74     3.14*   0.84     1.42    0.54     0.57    0.19     0.08    0.04
Rtw          4.95**  1.03     3.92**  1.00     2.41*   0.99    -0.84   -0.30    -0.90   -0.31
Re           2.20*   0.46     3.50**  0.88    -0.09   -0.04     1.19    0.40    -0.71   -0.28
Rq           1.06    0.23     0.02*   0.01     1.61    0.66    -1.03   -0.35     0.98    0.36
Rinf         1.78    0.38     2.11*   0.55     0.09    0.04     0.98    0.31    -0.56   -0.20
cpAch        3.78**  0.79     3.47**  0.88     1.53    0.62     0.08    0.03    -0.28   -0.09

Note. ** p < .001; * p < .05. P = Participatory; D = Direct; Wkt = Working in teams; Wka = Working alone. Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics; cpAch = change in overall perceived achievement.


Table 49

Summary Table of ANOVA and ANCOVA Results

                    Major                    IP team
                 Q3P        Q4D          Q7P         Q8D
ANOVA           53.99      42.79        97.70      128.41
  F              0.95       0.72         1.10        1.51
  η²              .12        .08          .22         .23
ANCOVA          47.15      51.89       108.16       74.85
  F              0.88       1.03         1.39        0.91
  η̂²              .12        .11          .27         .15

Note. P = Participatory instruction; D = Direct instruction
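As a point of reference for how the F and η² values in Table 49 can be obtained, the following minimal sketch fits a one-way ANOVA and computes eta squared as the effect sum of squares divided by the total sum of squares. It assumes hypothetical, simulated data and column names (fpach for the final perceived achievement score, major for students' majors) and is not the SPSS procedure used in the study.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
data = pd.DataFrame({
    "major": rng.choice(["BSN", "PT", "NUT", "SLP", "SW", "Other"], 40),
    "fpach": rng.normal(30, 6, 40),
})

# One-way ANOVA of final perceived achievement by major
model = smf.ols("fpach ~ C(major)", data=data).fit()
aov = sm.stats.anova_lm(model, typ=1)

# Eta squared = SS_effect / SS_total
eta_sq = aov.loc["C(major)", "sum_sq"] / aov["sum_sq"].sum()
print(aov)
print(f"eta squared = {eta_sq:.2f}")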


Table 50

Summary of Zero-Order and Partial Correlations Results, by Major

                 Table 26                 Table 27
Var (Q5M)     r(P)      r(D)         rp(P)     rp(D)      Pr²      Dr²      rp²
BSN           -.13       .22*         -.17       .17       .02      .05      .03
PT             .08      -.02           .06      -.06       .01      .00      .00
NUT           -.04       .08          -.06       .06       .00      .01      .00
SLP           -.04      -.05          -.01       .01       .00      .00      .00
SW             .12      -.09           .11      -.11       .01      .01      .01
Others         .09      -.04           .07      -.07       .01      .00      .00
fpAch          .39**    -.28*          .37**    -.37**     .15      .08      .14

Note. Q5M = Research question 5 Major; r(P) and rp(P) = Zero-order and partial correlations between Major and final perceived achievement with participatory instruction; r(D) and rp(D) = Zero-order and partial correlations between Major and final perceived achievement with direct instruction.
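The partial correlations reported in Tables 50 through 52 remove the linear effect of the covariate (the initial perceived achievement score) before correlating the remaining two variables. The following is a minimal sketch of that computation using the residual method; the variable names (ipach, fpach, bsn) are illustrative placeholders, not the study's variables, and the data are simulated.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
ipach = rng.normal(25, 5, 40)               # covariate: initial perceived achievement
fpach = ipach + rng.normal(8, 4, 40)        # final perceived achievement
bsn = rng.integers(0, 2, 40).astype(float)  # dummy-coded major (1 = BSN, 0 = other)

def partial_corr(x, y, covar):
    """Correlation between x and y after partialling out covar (residual method)."""
    z = sm.add_constant(covar)
    resid_x = sm.OLS(x, z).fit().resid
    resid_y = sm.OLS(y, z).fit().resid
    return np.corrcoef(resid_x, resid_y)[0, 1]

print(f"rp = {partial_corr(bsn, fpach, ipach):.2f}")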


Table 51

Summary of Zero-Order and Partial Correlations Results, by Team Preference

                 Table 29                 Table 30
Q6TP          r(P)      r(D)         rp(P)     rp(D)      Pr²      Dr²      rp²
Wkt           -.15       .25*         -.19       .19       .02      .06      .04
Wka            .20      -.16           .19      -.19       .04      .03      .04
fpAch          .39**    -.28*          .37**    -.37**     .15      .08      .14

Note. Q6TP = Research question 6 team preference; r(P) and rp(P) = Zero-order and partial correlations between team preference and final perceived achievement with participatory instruction; r(D) and rp(D) = Zero-order and partial correlations between team preference and final perceived achievement with direct instruction.


Table 52

Summary of Zero-Order and Partial Correlations Results, by IP Teams

                 Table 42                 Table 43
Var (Q9IPT)   r(P)      r(D)         rp(P)     rp(D)      Pr²      Dr²      rp²
A              .08      -.05           .07      -.07       .01      .00      .00
B              .11      -.10           .11      -.11       .01      .01      .01
C             -.03       .03          -.03       .03       .00      .00      .00
D             -.03       .12          -.06       .06       .00      .01      .00
E             -.14       .14          -.14       .14       .02      .02      .02
F              .06       .05           .02      -.02       .00      .00      .00
G              .07      -.07           .07      -.07       .00      .00      .00
H              .04      -.09           .06      -.06       .00      .00      .00
K             -.12       .11          -.12       .12       .01      .01      .01
fpAch          .39**    -.28*          .37**    -.37**     .15      .08      .14

Note. Q9IPT = Research question 9 inter-professional teams; r(P) and rp(P) = Zero-order and partial correlations between IP team and final perceived achievement with participatory instruction; r(D) and rp(D) = Zero-order and partial correlations between IP team and final perceived achievement with direct instruction.


Table 53

Summary Table of Hierarchical Multiple Regression Results

               Step 1                   Step 2                   Step 3
            F₁        R²     ΔR²     F₂        R²     ΔR²     F₃         R²     ΔR²
Q5M         6.73*     .07    .07     1.00      .12    .05     12.37**    .24    .12
Q6TP        6.73*     .07    .07     0.43      .08    .01     12.99**    .20    .12
Q9IPT       6.73*     .07    .07     0.99      .15    .08     15.17**    .29    .14

Note. Q5M = Research question 5 on students’ major; Q6TP = Research question 6 on students’ team preference; Q9IPT = Research question 9 on students’ inter-professional teams.
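To make the structure of Table 53 concrete, the following is a minimal sketch of a three-step hierarchical regression: the covariate is entered first, a dummy-coded predictor second, and instructional type third, with the R² change computed at each step and an F test for the final increment. The column names (ipach, wkt, ptype, fpach) and the simulated data are assumptions for illustration only; they are not the study's variables or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 90
data = pd.DataFrame({
    "ipach": rng.normal(25, 5, n),           # Step 1: covariate (initial score)
    "wkt": rng.integers(0, 2, n),            # Step 2: team-preference dummy
    "ptype": np.repeat([1, 0], [40, 50]),    # Step 3: instructional type (1 = participatory)
})
data["fpach"] = data["ipach"] + 6 * data["ptype"] + rng.normal(0, 4, n)

m1 = smf.ols("fpach ~ ipach", data).fit()
m2 = smf.ols("fpach ~ ipach + wkt", data).fit()
m3 = smf.ols("fpach ~ ipach + wkt + ptype", data).fit()

# R-squared change at each step, analogous to the ΔR² column in Table 53
for label, prev, cur in [("Step 1", None, m1), ("Step 2", m1, m2), ("Step 3", m2, m3)]:
    delta = cur.rsquared - (prev.rsquared if prev is not None else 0.0)
    print(f"{label}: R² = {cur.rsquared:.2f}, ΔR² = {delta:.2f}")

# F test for the Step 3 increment (full model vs. restricted model)
f_change, p_change, _ = m3.compare_f_test(m2)
print(f"F change = {f_change:.2f}, p = {p_change:.3f}")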


Chapter 5: Discussions and Conclusion

This study investigated the use of participatory instruction as a means of teaching the IOM standards in a Health Sciences and Professions course built around a module project, compared to teaching the same standards using direct instructional approaches. The study used the perceived achievement levels of the two instructional groups as one basis for comparison and also examined the HSP students’ self-concepts on the IOM standards in the two groups. Perceived achievement and self-concept differences by team preference, major, and inter-professional team were considered. Gains in perceived achievement and self-concepts by instructional type (experimental and comparative groups) between HSP students working in teams and students working alone, as well as differences among students’ inter-professional teams and students’ majors, were also considered.

Scope

Included in the study were all of the HSP students enrolled in the HSP 4510/5510 course at Ohio University on the Athens campus. The three cohort groups of HSP students in the experimental group, drawn from the 2013/2014 academic year (Fall, Spring, and Summer), were taught the five IOM standards using the participatory instructional approach, while the three cohort groups of HSP students in the comparative group, drawn from the 2014/2015 academic year (Fall, Spring, and Summer), were taught the five IOM standards using the direct instructional approach (lecture methods). Completing the study were 40 students in the participatory group and 50 in the direct group.

Data were collected twice during the study, using the same Self-Reported Knowledge scale on the five IOM standards. The pre-survey data were collected at the first class meeting of the HSP 5510/4510 course, where students self-rated their knowledge and skills (self-concepts) on the five IOM standards through a Google Sheets online survey. Post-survey data were collected through Google Sheets for all students during the last week of the semesters. Students’ journal reflections on their feelings about the class activities, their interactions with peers, collaborative decision making, their roles and others’ roles, and teamwork were collected through Blackboard Grade Center assignments every week during the semesters. The raw scores were analyzed with the SPSS statistical program to obtain change scores for perceived achievement and self-concept. The change scores were compared between levels of instructional type for gains using independent samples t-tests. HSP students’ final perceived achievement scores were compared among the levels of major within each instructional type, and among the levels of IP team within each instructional type, using one-way ANOVA. With the initial perceived achievement score as the covariate, the unadjusted and adjusted final mean scores were compared among levels of major and levels of IP team using a GLM univariate custom model (ANCOVA with Type III SS and Type I SS) to test for interaction effects and differences; partial correlations were used to examine relation and direction; and hierarchical multiple regressions and semi-partial correlations were used to examine influence and the unique variation explained. For hypotheses 1 to 9, the self-concept response ratings were weighted as points and converted to numerical scores, and dummy variables were prepared for the independent (categorical) variables. A two-tailed t-test of independent samples was used on the data for hypotheses 1 and 2; ANOVA and ANCOVA were used on the data for hypotheses 3, 4, 7, and 8; and partial correlation and hierarchical multiple regressions were used on the data for hypotheses 5, 6, and 9. Research question 10 was based on the students’ journal reflections and focused on explaining the results of the quantitative analysis. The qualitative data analysis strategies employed were themeing the data and elaborative coding.
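To illustrate the ANCOVA portion of this procedure, the following is a minimal sketch of a GLM univariate model with the initial perceived achievement score as the covariate and major as the factor, using Type III sums of squares with sum-to-zero contrasts. The simulated data and column names (fpach, ipach, major) are hypothetical placeholders; the sketch is not the SPSS syntax used in the study.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 40
data = pd.DataFrame({
    "major": rng.choice(["BSN", "PT", "NUT", "SLP", "SW", "Other"], n),
    "ipach": rng.normal(25, 5, n),            # covariate: initial perceived achievement
})
data["fpach"] = data["ipach"] + rng.normal(8, 4, n)

# ANCOVA: final perceived achievement by major, controlling for the initial score.
# Sum-to-zero contrasts keep the Type III sums of squares interpretable.
model = smf.ols("fpach ~ ipach + C(major, Sum)", data=data).fit()
print(sm.stats.anova_lm(model, typ=3))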

Discussion

Discussion of findings. The purpose of this quasi-experimental, participatory evaluation study was to examine the effectiveness of participatory instruction and direct instruction on the level of perceived achievement of HSP students; to compare the differences in means and gains of the HSP students’ perceived achievement and self-concept scores on a group module project after the treatment; and to determine whether students’ overall final perceived achievement scores on the standards increased or decreased as a result of the treatment, while controlling for overall initial perceived achievement scores, and then to follow up with purposefully selected typical cases to explore those results on self-concepts in more depth. In the quantitative phase, the five standards (“patient-centered care, interdisciplinary teamwork, evidence-based practice, quality improvement, and informatics”) were the predictors of students’ perceived achievement and self-concepts in the HSP course. The qualitative follow-up multiple case study analysis provided explanations for the quantitative results. After the data retrieved for this study were analyzed, the results merited further discussion. The hypotheses formulated guided the discussion of the findings.

Research question 1. The first research question was, Do HSP students who are taught using participatory instruction have a greater gain in the change in overall perceived achievement scores than the HSP students who are taught using direct instruction? In order to answer this question, a hypothesis was formulated and an independent samples t-test was conducted. Based on the quantitative analysis, the change in overall perceived achievement scores for HSP students taught using participatory instruction was statistically significantly greater than the change in overall perceived achievement scores for HSP students taught without using participatory instruction. In fact, within the standards, HSP students in the participatory group had positive and statistically significant gains in change in perceived self-concept on patient-centered care, interdisciplinary teamwork, and evidence-based practice, and positive but not statistically significant gains on quality improvement and informatics, suggesting that students in the participatory group held positive self-concepts on the standards. Considering the effect sizes of the standards, the ordering of the standards from simple to complex might be as follows: interdisciplinary teamwork (d = 1.03), patient-centered care (d = 0.74), evidence-based practice (d = 0.46), informatics (d = 0.38), and quality improvement (d = 0.23). This study revealed individual differences in perceived self-efficacy, or initial perceived achievement (Bandura & Locke, 2003). Bandura and Locke note that perceived self-efficacy is not just a reflection of initial perceived achievement, and that perceived self-efficacy contributes independently to final perceived achievement after controlling for initial perceived achievement (Bandura & Locke, 2003). In this study, perceived self-efficacy was taken to be reflected in students’ initial perceived achievement and, after instruction, in their final perceived achievement. It would appear that HSP students with a high level of perceived self-efficacy would rate their initial perceived achievement high on the pre-survey, and students with a low level of perceived self-efficacy would rate their initial perceived achievement low. It would also appear that HSP students in the participatory group might have had low perceived self-efficacy and rated their initial perceived achievement low on the pre-survey; after being taught with participatory instruction, these students might have had higher perceived self-efficacy and rated their perceived achievement high on the post-survey. It would also appear that HSP students in the participatory group had higher change scores in perceived achievement than the HSP students in the direct instruction group. It is possible that participatory instruction should be used if the goal is to increase HSP students’ final perceived achievement scores. Participatory instruction appeared to be effective for interdisciplinary teamwork and patient-centered care, because the effect sizes, d = 1.03 and 0.74, indicate that the effect of participatory instruction was positive and large. In addition, the statistical significance results indicate that the difference between the participatory and direct groups is unlikely to be due to chance.

Furthermore, PB claimed that “After completing this course, I feel like a total expert on this topic (interdisciplinary teamwork)”. PB stressed that “Patient-centered care is something I knew about before this class but now I know effective ways to implement patient centered care as an interdisciplinary team”.

The qualitative analysis revealed that HSP students in the direct instruction group acknowledged that they had no knowledge about the informatics and quality improvement standards. An illustrative excerpt from a journal reflection expressing this feeling came from DA: “…Before this PowerPoint, I had no prior knowledge regarding quality improvement and informatics… It was interesting hearing about how evidence based practice was incorporated in each curriculum. All programs have somewhat of a different approach, but EBP is integrated in each profession whether it is classes or research.”

These findings were consistent with previous research. Bandura and Locke (2003), Bloom (1976), and Gropper (1983a) found that students’ level of achievement increases if instruction is approached sensitively and systematically. Gropper notes that when two instructional theories and models address the same objectives, their similarities far outnumber their differences, and hence integration will be more useful than elimination (p. 48). Moore (2004) notes that designing instructional sequences helps students gain deep understanding and concludes that implementing appropriate learning strategies may guide students’ behavior toward mastering the content material. DiGiovanni and McCarthy (in press) found that students rated interdisciplinary teamwork, patient-centered care, and evidence-based practice high but rated informatics and quality improvement low. Bergan (1995) notes that examining a curriculum critically may reveal gaps between school training and marketplace needs. Ludwigsen and Albright (1994) recommend that certification of hospital practice, licensing and exempting, and continuing education be provided. Bray and Rogers (1995) note that training programs should educate professionals about patient evaluation and treatment and about what each professional can and cannot offer. Bray and Rogers recommend that the development of standards for training emphasize collaborative relationships that include (a) negotiating communication issues, (b) explaining theories, (c) ensuring confidentiality, (d) identifying time scheduling differences, and (e) acknowledging a lack of competition in practices. These findings have implications for curriculum design (Bergan, 1995; Bray & Rogers, 1995; Ludwigsen & Albright, 1994; Moore, 2004). More research would be needed to investigate the effectiveness of participatory instruction with the IOM standards.

Research question 2. The second research question was, How do HSP students feel about team preference on a group module project with regard to participatory and direct instructional types? To answer this research question, four hypotheses (2a, 2b, 2c, and 2d) were formulated and an independent samples t-test was conducted for each.

Hypothesis 2a. The quantitative results indicated that the change in overall perceived achievement scores for the HSP students who preferred working in teams taught using participatory instruction was statistically significantly greater than the change in overall perceived achievement scores for the HSP students who preferred working in teams taught without using participatory instruction. It may be concluded that HSP students who preferred working in teams taught using participatory instruction had positive, statistically significant, and large gains in change in perceived self-concept on patient-centered care, interdisciplinary teamwork, and evidence-based practice, and positive, statistically significant, small and moderate gains on quality improvement and informatics, respectively. Considering the effect sizes of the standards, the ordering of the standards might be as follows: interdisciplinary teamwork (d = 1.00), evidence-based practice (d = 0.88), patient-centered care (d = 0.84), informatics (d = 0.38), and quality improvement (d = 0.23).

The difference between the change in overall perceived achievement score for HSP students who preferred working in teams taught using participatory instruction and the change in overall perceived achievement score for HSP students who preferred working in teams taught using direct instruction was positive and statistically significant. This suggests that the change in overall perceived achievement score for HSP students who preferred working in teams taught using participatory instruction was greater than that of HSP students who preferred working in teams taught using direct instruction. In fact, within the standards, the difference between the change in perceived self-concept score for HSP students who preferred working in teams taught using participatory instruction and the corresponding score for those taught using direct instruction was positive and statistically significant for interdisciplinary teamwork. Across patient-centered care, interdisciplinary teamwork, evidence-based practice, quality improvement, and informatics, the change in perceived self-concept score for HSP students who preferred working in teams taught using participatory instruction was greater than the change in perceived self-concept score for HSP students who preferred working in teams taught using direct instruction. This implies that instructional type may matter differently for students who prefer working in teams and students who prefer working alone. It might also be that students in the participatory group had low perceived self-efficacy and rated their initial perceived achievement low on the pre-survey; after being taught with participatory instruction, those students might have had higher perceived self-efficacy and rated their final perceived achievement high on the post-survey. It would also appear that students in the participatory group had higher change scores in perceived achievement than students in the direct instruction group. It is possible that participatory instruction should be used if the goal is to increase the final perceived achievement scores of students who prefer working in teams. In particular, participatory instruction appeared effective for students with low perceived self-efficacy who preferred working in teams: interdisciplinary teamwork (d = 1.03, p < .05), evidence-based practice (d = 0.88, p < .05), and patient-centered care (d = 0.84, p < .05). The effect sizes indicate that participatory instruction was effective, with large effects, for students who preferred working in teams.

Multiple case analyses showed that HSP students who preferred working in teams in the participatory and direct instruction groups expressed their feelings in their journal reflections. For example, PB reported, “It was definitely helpful to have group members who knew all of the technicalities of what they should have been doing… I think that the feedback was great, but I am not sure if we are going to have time to incorporate everything into our iBook… I had always used the ArticlesPlus option, but it is very useful to have multiple options for search.” DA reported, “I am confident that working with other professions will be an asset in my future career… I would like to have an increased awareness on media pertinent to the healthcare field.” DC reported, “After the first lecture was over I felt a little intimidated by the course itself. I never expected to be sitting in a class with all graduate students except my fellow nursing students and myself. Having never been in a class with anyone except undergraduate nursing students it was quite a change from what I’m used to. What worried me most was that the other students won’t be able take me seriously because I am still an undergraduate student.”

In the survey comments, students described some problems they encountered when working in teams. For example, PB reported, “I think that it may be difficult to find times to meet together because I know that we are all busy and at least one of my group members does not live in Athens on the weekends.” DC reported, “So far we have not worked in teams yet in this course. Some problems that could potentially arise are: others not pulling their weight with a topic, conflicts with times if we need to meet outside of class, and differing opinions.” Students also described some benefits of working in teams. PB reported, “We worked really well together and were able to help each other out in areas that some people were better than others.” DC reported, “In the past some of the benefits I have encountered with team work include, different ideas and opinions, and being able to split a large amount of work up to accomplish the final product sooner.”

These findings were consistent with previous research studies. For example, Knowles et al. (2012) note that individuals differ in their approaches, strategies, and preferences when learning; that attending to those differences can significantly improve learning; and that understanding individual differences makes andragogy (adult learning) more effective in practice. Ducette et al. (1996) note that students enter a learning situation with different skills, preferences, and capacities; that these skills, preferences, and capacities influence their learning; and that matching the learner and the learning setting enhances learning. Knowledge of learning style can improve students’ self-concept and achievement (Dunn et al., 1989; Gregorc & Butler, 1984). Self-concept is directly related to self-esteem and future achievement (Trautwein et al., 2006). Trautwein et al. note that isolating students can have an inverse effect on their self-concept, leading to a reduction in their self-esteem and academic achievement, although they found that self-esteem is not a strong predictor of academic achievement. Working in teams requires students to engage in teamwork (working collaboratively), communication (communicating the work of the group to outsiders), and reasoning (reasoning to solve problems) processes (Schanks, 1993). These findings have implications for learning styles, preferences, and individual differences (Bray & Rogers, 1995; Ducette et al., 1996; Knowles et al., 2012; McDaniel, 1995; Schanks, 1993; Trautwein et al., 2006). Further research is needed to investigate the effect of participatory instruction on the perceived achievement and self-concepts, with respect to the IOM standards, of students who prefer working in teams.

Hypothesis 2b. The quantitative results indicated that there was no statistically significant difference between the change in overall perceived achievement scores for the HSP students who preferred working alone taught using participatory instruction and the change in overall perceived achievement scores for the HSP students who preferred working alone taught without using participatory instruction. In fact, only one of the differences in change in perceived self-concept was positive and statistically significantly large: interdisciplinary teamwork, t(25) = 2.41, p = .02, d = 0.99, 95% CI [0.18, 2.25]. Several of the differences were positive but not statistically significant: patient-centered care, t(25) = 1.42, p = .17, d = 0.54, 95% CI [-0.30, 1.65]; quality improvement, t(25) = 1.61, p = .12, d = 0.66, 95% CI [-0.31, 2.49]; and informatics, t(25) = 0.09, p = .93, d = 0.04, 95% CI [-1.11, 1.21]. However, interestingly, there was a negative and statistically non-significant difference for evidence-based practice, t(25) = -0.09, p = .93, d = -0.04, 95% CI [-1.12, 1.03]. It appeared that students in the direct instruction group who preferred working alone might have had a more positive self-concept in evidence-based practice than their counterparts in the participatory instruction group. The difference between the change in overall perceived achievement score for HSP students who preferred working alone taught using participatory instruction and the change in overall perceived achievement score for HSP students who preferred working alone taught using direct instruction was positive but not statistically significant. Although the change score for the participatory group was numerically greater, a reliable gain does not exist, suggesting that instructional type does not appear to matter for students who preferred working alone.

In fact, within the standards, the difference between the change in perceived self-concept score for HSP students who preferred working alone taught using participatory instruction and the change in perceived self-concept score for HSP students who preferred working alone taught using direct instruction was positive and statistically significant for interdisciplinary teamwork. This suggests that, on interdisciplinary teamwork, the change in perceived self-concept score for HSP students who preferred working alone taught using participatory instruction was greater than the change in perceived self-concept score for HSP students who preferred working alone taught using direct instruction. In addition, the differences for patient-centered care, quality improvement, and informatics were positive but not statistically significant, suggesting that on these standards the change in perceived self-concept scores for HSP students who preferred working alone taught using participatory instruction were greater than those for HSP students who preferred working alone taught using direct instruction. However, the difference for evidence-based practice was negative and not statistically significant, suggesting that, on evidence-based practice, the change in perceived self-concept score for HSP students who preferred working alone taught using direct instruction was greater than that for HSP students who preferred working alone taught using participatory instruction.

Qualitative findings revealed that HSP students who preferred working alone in the participatory and direct instruction groups expressed their feelings in their journal reflections as follows: PC reported, “For the EBP assignment, I found it extremely helpful…

However, I do not feel confident I could duplicate the experience that easily because I myself was not familiar with the ins and outs…We had a nice conversation about group dynamic, but ultimately decided we experienced it more during our undergraduate career.

I only half agreed because I have experienced discontent among groups since I have been in graduate school.” DB also reported that “I enjoyed hearing everyone’s perspectives on evidence-based practice in their field. The lecture was easy to follow and informative.

Reading the PowerPoint before class helped me follow along. There are nothings that I can say I do not like about the class. If I had to say anything it would be using new apps.

However, I do not dislike it – it is simply different and will take getting used to. I have never extensively used an iPad before, so this will provide a good opportunity to learn how.”

These findings were consistent with previous research studies. For example, Knowles et al. (2012) note that individuals have different team preferences for learning activities, that attending to these preferences can significantly improve learning, and that understanding individual preferences may make learning more effective in practice. Ducette et al. (1996) note that students enter a learning situation with different preferences that affect their learning. Knowledge of team preference can improve students’ self-concept and achievement (Dunn et al., 1989; Gregorc & Butler, 1984). Self-concept can directly predict final perceived achievement (Trautwein et al., 2006). Trautwein et al. note that separating students can inversely affect their self-concept, which can in turn inversely affect their final perceived achievement. Working in teams requires students to engage in teamwork (working collaboratively), communication (communicating the work of the group to outsiders), and reasoning (reasoning to solve problems) processes (Schanks, 1993).

Hypothesis 2c. Statistically, there was no significant difference between the change in overall perceived achievement scores for the HSP students who preferred working in teams and the change in overall perceived achievement scores for the HSP students who preferred working alone, both taught using participatory instruction. This implies that a reliable gain in the change in overall perceived achievement score does not exist, and that team preference does not matter within the participatory instruction group.

As a matter of fact, none of the differences in change in perceived self-concept was statistically significant. There were some positive differences: patient-centered care, t(38) = 0.57, p = .57, d = 0.19, 95% CI [-0.53, 0.95]; evidence-based practice, t(22.9) = 1.19, p = .25, d = 0.40, 95% CI [-0.34, 1.25]; and informatics, t(38) = 0.98, p = .33, d = 0.31, 95% CI [-0.46, 1.34]; and some negative differences: interdisciplinary teamwork, t(38) = -0.84, p = .35, d = -0.30, 95% CI [-1.25, 0.46]; and quality improvement, t(23.2) = -1.03, p = .31, d = -0.35, 95% CI [-1.75, 0.58]. The difference between the change in overall perceived achievement score for HSP students who preferred working in teams and the change in overall perceived achievement score for HSP students who preferred working alone, both taught using participatory instruction, was positive but not statistically significant. This suggests that the change in overall perceived achievement score for HSP students who preferred working in teams taught using participatory instruction was numerically greater than the change in overall perceived achievement score for HSP students who preferred working alone taught using participatory instruction.

In fact, within the standards, the differences between the change in perceived self-concept scores for HSP students who preferred working in teams and those who preferred working alone, both taught using participatory instruction, were positive but not statistically significant for patient-centered care, evidence-based practice, and informatics. This suggests that, on patient-centered care, evidence-based practice, and informatics, the change in perceived self-concept scores for HSP students who preferred working in teams taught using participatory instruction were greater than the change in perceived self-concept scores for HSP students who preferred working alone taught using participatory instruction. However, the differences were negative and not statistically significant for interdisciplinary teamwork and quality improvement, suggesting that, on those standards, the change in perceived self-concept scores for HSP students who preferred working alone taught using participatory instruction were greater than the change in perceived self-concept scores for HSP students who preferred working in teams taught using participatory instruction.

A multiple case study analysis revealed that HSP students who preferred working in teams and those who preferred working alone in the participatory instruction group expressed their feelings in their journal reflections as follows: PB reported on interdisciplinary teamwork that “I felt bad that [instructor] seemed to feel bad that we were all really stressed out. He calmed my nerves a lot just by saying that this project is not meant to be a giant stressful assignment and that everyone was going to get it done. I’m glad that [instructor] at least cares that we are stressed and was willing to work with us on the requirements because a lot of my professors could care less if their assignments are too much of if we are not going to have time to get everything done.” Similarly, PB reported on quality improvement that

“Working on the HIPAA apps didn't work very well for my group because both of our assigned apps were very highly protected and we didn't have access to them yet. It was kind of interesting to see that they were highly protected though, because that makes me believe it might actually be HIPPA protected.” However, PC on interdisciplinary teamwork reported that “Honestly, the majority of doctors I see aren’t extremely personable—I don’t mean to stereotype, but they are in and out without much room for conversation not about my body and symptoms. Sometimes I think we put too much pressure on physicians to give us the right answer, the answer we want, and also be a

‘best friend’ while doing it.” In addition, PC on quality improvement reported that “The patient safety assignment was a little challenging because there was nothing that jumped out at me in terms of safety.”

These findings were consistent with previous research studies. Gropper (1983a) notes that when two instructional theories and models address the same objectives, their similarities far outnumber their differences, and hence integration will be more useful than elimination (p. 48). Mastery learning is another effective way to improve both achievement and self-concept, as it is based on the assumption that all students can reach a high level of competence if the right action is taken and enough time is allowed (Silvernail, 1987). Moore (2004) notes that designing instructional sequences helps students gain deep understanding. These findings have implications for curriculum design, instruction, and learning styles. More research would be needed to investigate the effectiveness of participatory instruction with IOM standards.

Hypothesis 2d. The quantitative analysis revealed that there was no statistically significant gain between the change in overall perceived achievement scores for the HSP students who preferred working in teams and the change in overall perceived achievement scores for the HSP students who preferred working alone, both taught without using participatory instruction. Therefore, the null hypothesis was not rejected. This implies that there is no reliable gain in the change in overall perceived achievement score, and that team preference does not appear to matter within the direct instruction group.

As a matter of fact, none of the gains in change in perceived self-concept scores was statistically significant; however, there were some positive gains on patient-centered care, t(11.9) = 0.08, p = .94, d = 0.04, 95% CI [-0.91, 0.97], and quality improvement, t(48) = 0.98, p = .33, d = 0.36, 95% CI [0.53, 1.53], and some negative gains on interdisciplinary teamwork, t(48) = -0.90, p = .38, d = -0.31, 95% CI [-1.03, 0.40], evidence-based practice, t(11.8) = -0.71, p = .49, d = -0.28, 95% CI [-1.18, 0.60], and informatics, t(48) = -0.56, p = .58, d = -0.20, 95% CI [-1.27, 0.71]. The positive gains mean that the change scores for HSP students who preferred working in teams were greater than the change scores for HSP students who preferred working alone, both groups taught without using participatory instruction.

Conversely, the negative gains mean that the change scores for HSP students who preferred working alone were greater than the change scores for HSP students who preferred working in teams, both groups taught without using participatory instruction. Overall, the gain between the change in overall perceived achievement score for HSP students who preferred working in teams and that for students who preferred working alone, both taught using direct instruction, was negative and not statistically significant. This suggests that, under direct instruction, the change in overall perceived achievement score was numerically greater for students who preferred working alone than for students who preferred working in teams.
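For readers who wish to reproduce this kind of gain comparison, the following Python sketch illustrates how an independent samples (Welch's) t-test, Cohen's d, and a 95% confidence interval for the mean difference could be computed from two sets of change scores. The sketch is illustrative only; the function name and the assumption that change scores are available as two numeric arrays are hypothetical and not part of the original analysis.

import numpy as np
from scipy import stats

def gain_comparison(change_team, change_alone, alpha=0.05):
    # Compare change scores for two preference groups (hypothetical data).
    # Returns Welch's t, p, Cohen's d, and a 95% CI for the mean difference.
    a = np.asarray(change_team, dtype=float)
    b = np.asarray(change_alone, dtype=float)
    t_stat, p_val = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    n1, n2 = len(a), len(b)
    # Cohen's d with a pooled standard deviation
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    # Welch-Satterthwaite degrees of freedom and CI for the mean difference
    se = np.sqrt(a.var(ddof=1) / n1 + b.var(ddof=1) / n2)
    df = se ** 4 / ((a.var(ddof=1) / n1) ** 2 / (n1 - 1) + (b.var(ddof=1) / n2) ** 2 / (n2 - 1))
    crit = stats.t.ppf(1 - alpha / 2, df)
    diff = a.mean() - b.mean()
    return t_stat, p_val, d, (diff - crit * se, diff + crit * se)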

In fact, within the standards, the gains between the change in perceived self-concept scores for HSP students who preferred working in teams and those who preferred working alone, both taught using direct instruction, were positive but not statistically significant on patient-centered care and quality improvement. This suggests that, on these two standards, the change in perceived self-concept scores was numerically greater for students who preferred working in teams. The gains were negative and not statistically significant on interdisciplinary teamwork, evidence-based practice, and informatics, suggesting that, on those three standards, the change in perceived self-concept scores was numerically greater for students who preferred working alone.

A multiple case study analysis revealed that HSP students who preferred working in teams and those who preferred working alone in the direct instruction group expressed their feelings in their journal reflections as follows: DA reported that “I am certain most of the professionals in the field today did not have a course in IP teams. I am confident that working with other professions will be an asset in my future career…. Finally, I would like to have an increased awareness on media pertinent to the healthcare field. Our society will be continuing to grow in social media now and in the future…. Another interesting fact I found out was the difference between a nutritionist and dietician. I was unaware that the dietician had more schooling.” In addition, DC also reported that “Bringing an iPad into our inter-professional course seems like a great idea. The iPad already has so many wonderful apps loaded onto it. These apps definitely seem like they will benefit each of us with our own area of study and ultimately help with our group collaboration throughout this course…. The second class I think went very well and I was definitely more comfortable…. The discussions really helped me to understand the points that were being talked about and allowed me to connect the topics to each area of study that is represented in our class.” However, DB reported that “I always like meeting students in other professions, because even though we are in different disciplines, we all have the healthcare interest in common…. I enjoyed hearing everyone’s perspectives on evidence-based practice in their field. The lecture was easy to follow and informative. Reading the

PowerPoint before class helped me follow along…. I have never extensively used an iPad before, so this will provide a good opportunity to learn how.”

These findings were consistent with previous research studies. Trautwein et al.

(2006) note that separating students could have an inverse effect on their self-concept that could also lead to an inverse effect on their self-esteem and academic achievement.

Gropper (1983a) notes that when two instructional theories and models address the same objectives, their similarities far outnumber their differences, and hence integration will be more useful than elimination (p. 48). Mastery learning is another effective way to improve both achievement and self-concept, as it is based on the assumption that all students can reach a high level of competence if the right action is taken and enough time is allowed (Silvernail, 1987). Moore (2004) notes that designing instructional sequences helps students gain deep understanding.

Research question 3. The third research question was, How does the participatory instruction of IOM standards on a group module project affect students’ final perceived achievement scores in their majors, controlling for their initial perceived achievement scores? In order to answer this question, a hypothesis was formulated, and one-way ANOVA and ANCOVA were conducted. Statistically, there was no significant difference in final perceived achievement scores among HSP students from the various majors taught using participatory instruction, controlling for their initial perceived achievement scores.

Thus, participatory instruction had a positive but not statistically significant effect on

HSP students’ final perceived achievement across majors, controlling for their initial perceived achievement scores. This implies that there is no statistically reliable difference among majors, and that, within participatory instruction, students’ professional (major) teams do not appear to matter.

From Tables 16 and 18, the F for the main effect of major within the participatory instruction group in the ANCOVA (0.88) was smaller than the F when initial perceived achievement was not statistically controlled (0.95), because controlling for the variance associated with initial perceived achievement increased the size of the within-group error variance. The effect size of major on final perceived achievement score was essentially unchanged (.12 vs. .12). These results suggest that students’ majors do not moderate the effectiveness of participatory instruction on the students’ final perceived achievement score. This is an interesting outcome. Students’ majors uniquely explained 12% of the variance in final perceived achievement score within the participatory instruction group. It would be advisable to compare this uniquely explained proportion with the corresponding proportion for students’ majors under direct instruction in Research question 4 (see Tables 19 and 24). It would appear that participatory instruction might be a better instructional strategy for teaching students in their major teams.
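To make the comparison between the ANOVA and ANCOVA F values concrete, the following Python sketch shows how both models could be fit and how a partial eta squared effect size could be extracted. It is offered only as an illustration; the data frame, file name, and column names (final, initial, major) are hypothetical assumptions, not the study's actual data files.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per student with final and initial perceived
# achievement scores and a categorical major (assumed file and column names).
df = pd.read_csv("participatory_group.csv")

# One-way ANOVA: final achievement by major, ignoring the covariate.
anova = sm.stats.anova_lm(smf.ols("final ~ C(major)", data=df).fit(), typ=2)

# ANCOVA: the same comparison, controlling for initial achievement.
ancova = sm.stats.anova_lm(smf.ols("final ~ initial + C(major)", data=df).fit(), typ=2)

def partial_eta_sq(table, effect="C(major)"):
    # Partial eta squared = SS_effect / (SS_effect + SS_residual).
    ss_effect = table.loc[effect, "sum_sq"]
    return ss_effect / (ss_effect + table.loc["Residual", "sum_sq"])

print(anova.loc["C(major)", "F"], partial_eta_sq(anova))
print(ancova.loc["C(major)", "F"], partial_eta_sq(ancova))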

These findings were consistent with previous research studies (Bodner et al.,

2014; Dahllöf, 1971; Felder & Brent, 2000; Slavin, 1990; Oakley et al., 2004). Dahllöf

(1971) notes that grouping has little or no bearing on learners’ achievement. Dahllöf stresses that achievement is not a direct function of the grouping but of the actual teaching process, the general style of instruction, and the teachers and their competence (p. 4). Bodner et al.

(2014) note that “cooperative learning” may improve students’ final achievement and enhance students’ self-esteem (p. 142), and that the significant element in cooperative learning is creating an interactive environment. Cooperative learning strategies are effective, produce a wide range of outcomes, enhance achievement, and improve intergroup relations (Slavin, 1990). Slavin notes that cooperative learning methods can be instructionally effective means of increasing students’ achievement when they use group goals and individual accountability (p. 32). Slavin (1990) concludes that cooperative learning strategies improve students’ achievement. Cooperative learning directly affects almost every learning outcome (Oakley et al., 2004). However, the findings of this study were inconsistent with Slavin’s (1990) claim that “about two-thirds of the time there will be a significant difference between the experimental and the control groups in favor of the experimental groups” (p. 53). These findings have implications for IP teams, instruction, and learning styles.

Research question 4. The fourth research question was, How does the direct instruction of IOM standards on a group module project affect students’ final perceived achievement scores in their majors, controlling for their initial perceived achievement scores? In order to answer this question, a hypothesis was formulated, and one-way

ANOVA and ANCOVA were conducted. Statistically, there was no significant difference in final perceived achievement scores among HSP students from the various majors taught using direct instruction, controlling for their initial perceived achievement scores. Thus, direct instruction had a positive but not statistically significant effect on HSP students’ final perceived achievement across majors, controlling for their initial perceived achievement scores. These results suggest that students’ majors do not moderate the effectiveness of direct instruction on the students’ final perceived achievement score. This is an interesting outcome. Students’ majors within the direct instruction group uniquely explained

11% of the variance in students’ final perceived achievement score. It would be advisable to compare this uniquely explained proportion with the corresponding proportion for students’ majors under participatory instruction in Research question 3 (see Tables 19 and 24). It would appear that participatory instruction might be a better instructional strategy for teaching professional teams.

From Tables 22 and 24, the F for the main effect of major within the direct instruction group in the ANCOVA (1.03) was greater than the F when initial perceived achievement was not statistically controlled (0.72), because controlling for the variance associated with initial perceived achievement decreased the size of the within-group error variance. Similarly, the effect of major on final perceived achievement score appeared somewhat stronger (.08 vs. .11).

These findings were consistent with previous research studies (Bodner et al.,

2014; Dahllöf, 1971; Felder & Brent, 2000; Slavin, 1990; Oakley et al., 2004). Dahllöf

(1971) notes that grouping has little or no bearing on learners’ achievement. Dahllöf stresses that achievement cannot be a direct outcome of the grouping arrangements but of the actual teaching process, the general style of instruction, and the teachers and their competence (p.

4). Bodner et al. (2014) note that cooperative learning may improve student achievement and enhance students’ self-esteem (p. 142), and that the significant element in cooperative learning is creating an interactive environment. Cooperative learning strategies are effective, produce a wide range of outcomes, enhance achievement, and improve intergroup relations (Slavin, 1990). Slavin notes that cooperative learning methods can be instructionally effective means of increasing students’ achievement when they use group goals and individual accountability (p. 32). Like cooperative learning, participatory learning also uses group goals, individual accountability, and group accountability. Slavin (1990) concludes that cooperative learning strategies improve students’ achievement. Cooperative learning can directly affect every learning outcome

(Oakley et al., 2004). However, the findings of this study were inconsistent with Slavin’s

(1990) claim that “about two-thirds of the time there will be a significant difference between the experimental and the control groups in favor of the experimental groups” (p.

53). These findings have implications for IP teams, instruction, and learning styles.

Research question 5. The fifth research question was, What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores? In order to answer this question, a hypothesis was formulated, and partial correlation and hierarchical multiple regression were conducted. Statistically, participatory instruction had a positive and statistically significant impact on the final perceived achievement scores of HSP students across majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. It would appear that participatory instruction correlated significantly and positively with HSP students’ final perceived achievement score, whereas direct instruction correlated significantly and negatively with

HSP students’ final perceived achievement score. The dummy variables for students’ major did not appear to relate significantly to HSP students’ perceived achievement score. A possible explanation for these results is that final perceived achievement scores tended to be higher under participatory instruction. These results suggest that participatory instruction could be used to increase HSP students’ perceived collective efficacy.

From Tables 26 and 27, the relation between the final perceived achievement score and participatory instruction indicated that the partial correlation coefficient (rp = .37) was slightly smaller than the zero-order correlation coefficient (r = .39) obtained when the effect of initial perceived achievement score was not controlled for; the coefficient was almost the same as before and remained statistically significant (its p-value was still below .05). In terms of variance, the R2 for the partial correlation was .14, which means that participatory instruction shared 14% of the variance in final perceived achievement (compared to 15% when initial perceived achievement score was not controlled). In addition, the partial correlation between final perceived achievement score and direct instruction was -.37, which was considerably larger in magnitude than the zero-order correlation obtained when the effect of initial perceived achievement score was not controlled for (r = -.28); the coefficient was about 1.3 times what it was before and remained statistically significant (its p-value was still below .05). In terms of variance, the R2 for the partial correlation was .14, which means that direct instruction shared 14% of the variance in final perceived achievement score (compared to 8% when initial perceived achievement score was not controlled).
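The partial correlations reported here can be obtained from the zero-order correlations among the three variables. The following Python sketch is offered only as an illustration with hypothetical variable names; it uses the standard first-order partial correlation formula with a t-test on n - 3 degrees of freedom.

import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    # Partial correlation of x and y controlling for z (one covariate):
    # r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    r_xy = stats.pearsonr(x, y)[0]
    r_xz = stats.pearsonr(x, z)[0]
    r_yz = stats.pearsonr(y, z)[0]
    rp = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))
    n = len(x)
    t = rp * np.sqrt((n - 3) / (1 - rp ** 2))
    p = 2 * stats.t.sf(abs(t), n - 3)
    return rp, p

# Example call with hypothetical arrays:
# partial_corr(final_score, participatory_dummy, initial_score)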

The results in Table 28 revealed that at Step 1, initial perceived achievement contributed significantly to the regression model, F(1, 88) = 6.73, p = .011, and accounted for 7% of the variation in final perceived achievement. In Step 2, introducing the major dummy variables explained an additional 5% of the variation in final perceived achievement, and this change in R2 was not significant, F(5, 83) = 1.00, p = .43. In Step 3, adding instructional type to the regression model explained an additional 12% of the variation in final perceived achievement, and this change in R2 was significant, F(1, 82) = 12.37, p = .001. When all three sets of independent variables were included in Step 3 of the regression model, none of the major variables was a significant predictor of final perceived achievement. The most important predictor of final perceived achievement was participatory instruction, which uniquely explained 12% of the variation in final perceived achievement. Together, the three sets of independent variables accounted for 24% of the variance in final perceived achievement.
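The hierarchical (nested-model) regression described above can be reproduced by fitting the three steps as nested ordinary least squares models and testing the change in R2. The following Python sketch is illustrative only; the data frame, file name, and column names are assumptions rather than the study's actual files.

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical data: final and initial scores, a categorical major, and a
# participatory dummy (1 = participatory instruction, 0 = direct instruction).
df = pd.read_csv("hsp_scores.csv")

step1 = smf.ols("final ~ initial", data=df).fit()
step2 = smf.ols("final ~ initial + C(major)", data=df).fit()
step3 = smf.ols("final ~ initial + C(major) + participatory", data=df).fit()

def r2_change(reduced, full):
    # F test for the change in R^2 when predictors are added to a nested model.
    df_num = full.df_model - reduced.df_model
    df_den = full.df_resid
    f = ((full.rsquared - reduced.rsquared) / df_num) / ((1 - full.rsquared) / df_den)
    return f, stats.f.sf(f, df_num, df_den)

print(step1.rsquared)            # R^2 for Step 1
print(r2_change(step1, step2))   # F and p for the Step 2 R^2 change
print(r2_change(step2, step3))   # F and p for the Step 3 R^2 change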

These findings were consistent with previous research studies (Bandura, 1982;

Gropper, 1983a; Slavin, 1990). Gropper (1983a) notes that correlations between achievement and instruction were most significant, and that the relationship between instruction and achievement should hold (p. 48). Cooperative learning strategies positively influence achievement outcomes (Slavin, 1990). Bandura (1982) notes that “knowledge of personal efficacy is related to a perceived group efficacy” and “collective efficacy is rooted in self-efficacy” (p. 142).

Research question 6. The sixth research question was, What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores? In order to answer this question, a hypothesis was formulated, and partial correlation and hierarchical multiple regression were conducted. Statistically, participatory instruction had a positive and statistically significant impact on the final perceived achievement scores of HSP students across team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. It would appear that participatory instruction correlated significantly and positively with HSP students’ final perceived achievement score, whereas direct instruction correlated significantly and negatively with HSP students’ final perceived achievement score.

The dummy variable for students’ team preference did not appear to relate significantly to HSP students’ perceived achievement score. A possible explanation for these results is that final perceived achievement scores tended to be higher under participatory instruction. These results suggest that participatory instruction could be used to increase HSP students’ perceived collective efficacy regardless of team preference.

From Tables 29 and 30, the relation between the final perceived achievement score and participatory instruction indicated that the partial correlation (rp = .37) was slightly smaller than the zero-order correlation (r = .39) obtained when the effect of initial perceived achievement score was not controlled for; the coefficient was almost the same as before and remained statistically significant (its p-value was still below .05). In terms of variance, the R2 for the partial correlation was .14, which means that participatory instruction shared 14% of the variance in final perceived achievement (compared to 15% when initial perceived achievement score was not controlled). In addition, the partial correlation between final perceived achievement score and direct instruction was -.37, which was considerably larger in magnitude than the zero-order correlation obtained when the effect of initial perceived achievement score was not controlled for (r = -.28); the coefficient was about 1.3 times what it was before and remained statistically significant (its p-value was still below .05). In terms of variance, the R2 for the partial correlation was .14, which means that direct instruction shared 14% of the variance in final perceived achievement score (compared to 8% when initial perceived achievement score was not controlled).

The results in Table 31 revealed that at Step 1, initial perceived achievement contributed significantly to the regression model, F(1, 88) = 6.73, p = .011, and accounted for 7% of the variation in final perceived achievement. In Step 2, introducing the team preference dummy variable explained essentially no additional variation in final perceived achievement, and this change in R2 was not significant, F(1, 87) = 0.43, p = .52. In Step 3, adding instructional type to the regression model explained an additional 12% of the variation in final perceived achievement, and this change in R2 was significant, F(1, 86) = 12.99, p = .001. When all three sets of independent variables were included in Step 3 of the regression model, the team preference variable was not a significant predictor of final perceived achievement. The most important predictor of final perceived achievement was participatory instruction, which uniquely explained 12% of the variation in final perceived achievement. Together, the three sets of independent variables accounted for 24% of the variance in final perceived achievement. These findings were consistent with previous research studies (Gropper, 1983a; Slavin, 1990). Gropper (1983a) notes that correlations between achievement and instruction were most significant, and that the relationship between instruction and achievement should hold (p. 48). Cooperative learning strategies positively influence achievement outcomes (Slavin, 1990). These findings have implications for instruction and learning styles.

Research question 7. The seventh research question was, How does a participatory instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their IP teams, controlling for their initial perceived achievement scores? In order to answer this question, a hypothesis was formulated, and one-way ANOVA and ANCOVA were conducted. Statistically, there was no significant difference in final perceived achievement scores among HSP students from the various IP teams taught using participatory instruction, controlling for their initial perceived achievement scores.

Thus, participatory instruction had a positive but not statistically significant effect on

HSP students’ final perceived achievement across IP teams, controlling for initial perceived achievement score. These results suggest that students’ IP teams do not moderate the effectiveness of participatory instruction on the students’ final perceived achievement score. This is a potential area for future attention. Students’ IP teams within the participatory instruction group uniquely explained 27% of the variance in final perceived achievement score. It would be advisable to compare this uniquely explained proportion with the corresponding proportion for students’ IP teams under direct instruction in

Research question 8 (see Tables 35 and 40). It would appear that participatory instruction might be a better instructional strategy for teaching professional teams.

From Tables 33 and 35, the F for the main effect of IP team within the participatory instruction group in the ANCOVA (1.39) was greater than the F when initial perceived achievement was not statistically controlled (1.10), because controlling for the variance associated with initial perceived achievement decreased the size of the within-group error variance. Similarly, the effect of IP team on final perceived achievement score appeared somewhat stronger (.22 vs. .27).

These findings were consistent with previous research studies (Bodner et al.,

2014; Dahllöf, 1971; Felder & Brent, 2000; Oakley et al., 2004; Slavin, 1990). Dahllöf

(1971) notes that grouping has little or no bearing on learners’ achievement. Dahllöf stresses that achievement must never be regarded as a direct outcome of the grouping arrangements but rather of the actual teaching process, the general style of instruction, and the teachers and their competence (p. 4). Bodner et al. (2014) note that cooperative learning may improve student achievement and enhance students’ self-esteem (p. 142), and that the most important element in cooperative learning is to create an interactive environment.

Cooperative learning strategies are effective, produce a wide range of outcomes, enhance achievement, and improve intergroup relations (Slavin, 1990). Slavin notes that cooperative learning methods can be instructionally effective means of increasing students’ achievement when they use group goals and individual accountability (p. 32).

Slavin (1990) concludes that cooperative learning strategies improve students’ achievement. Cooperative learning has direct effects on almost every learning outcome

(Oakley et al., 2004). However, the findings of this study were inconsistent with Slavin’s

(1990) claim that “about two-thirds of the time there will be a significant difference between the experimental and the control groups in favor of the experimental groups” (p.

53). These findings have implications for IP teams, instruction, and learning styles.

Research question 8. The eighth research question was, How does the direct instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their IP teams, controlling for their initial perceived achievement scores? In order to answer this question, a hypothesis was formulated; and

ANOVA and ANCOVA were conducted. Statistically, there was no significant difference in final perceived achievement scores among HSP students from the various IP teams taught using direct instruction, controlling for their initial perceived achievement scores. Thus, direct instruction had a positive but not statistically significant effect on HSP students’ final perceived achievement across IP teams, controlling for initial perceived achievement score. These results suggest that students’ IP teams do not moderate the effectiveness of direct instruction on the students’ final perceived achievement score. This is a potential area for future attention. Students’ IP teams within the direct instruction group uniquely explained

15% of the variance in students’ final perceived achievement score. It would be advisable to compare this uniquely explained proportion with the corresponding proportion for students’ IP teams under participatory instruction in Research question 7 (see Tables 35 and 40). It would appear that participatory instruction might be a better instructional strategy for teaching professional teams.

From Tables 38 and 40, the F for the main effect of IP team within the direct instruction group in the ANCOVA (0.91) was smaller than the F when initial perceived achievement was not statistically controlled (1.51), because controlling for the variance associated with initial perceived achievement increased the size of the within-group error variance. Similarly, the effect of IP team on final perceived achievement score appeared somewhat weaker (.23 vs. .15).

These findings were consistent with previous research studies (Bodner et al.,

2014; Dahllöf, 1971; Felder & Brent, 2000; Oakley et al., 2004; Slavin, 1990). Dahllöf

(1971) notes that grouping has little or no bearing on learners’ achievement. Dahllöf stresses that achievement cannot be a direct outcome of the grouping arrangements but of the actual teaching process, the general instruction style, and the teachers and their competence (p. 4).

Bodner et al. (2014) note that cooperative learning may improve student achievement and enhance students’ self-esteem (p. 142), and that the significant element in cooperative learning is to create an interactive environment. Cooperative learning strategies are effective, produce a wide range of outcomes, enhance achievement, and improve intergroup relations (Slavin, 1990). Slavin notes that cooperative learning methods can be instructionally effective means of increasing students’ achievement when they use group goals and individual accountability (p. 32). Slavin (1990) concludes that cooperative learning strategies improve students’ achievement. Cooperative learning has direct effects on almost every learning outcome (Oakley et al., 2004). However, the findings of this study were inconsistent with Slavin’s (1990) claim that “about two-thirds of the time there will be a significant difference between the experimental and the control groups in favor of the experimental groups” (p. 53). These findings have implications for IP teams, instruction, and learning styles.

Research question 9. The ninth research question was, What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores? In order to answer this question, a hypothesis was formulated, and partial correlation and hierarchical multiple regression were conducted. Statistically, participatory instruction had a positive and statistically significant impact on the final perceived achievement scores of HSP students across IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. It would appear that participatory instruction correlated significantly and positively with HSP students’ final perceived achievement score, whereas direct instruction correlated significantly and negatively with HSP students’ final perceived achievement score. The dummy variables for students’ IP teams did not appear to relate significantly to HSP students’ perceived achievement score. A possible explanation for these results is that final perceived achievement scores tended to be higher under participatory instruction. These results suggest that participatory instruction could be used to increase HSP students’ perceived collective efficacy in their IP teams.

From Tables 42 and 43, the relation between the final perceived achievement score and participatory instruction indicated that the partial correlation (rp = .37, p = .001) was slightly smaller than the zero-order correlation (r = .39, p = .001) obtained when the effect of initial perceived achievement score was not controlled for; the coefficient was almost the same as before and remained statistically significant (its p-value was still below .05).

In terms of variance, the R2 for the partial correlation was .14, which means that participatory instruction shared 14% of the variance in final perceived achievement (compared to 15% when initial perceived achievement score was not controlled). In addition, the partial correlation between final perceived achievement score and direct instruction was -.37, which was considerably larger in magnitude than the zero-order correlation obtained when the effect of initial perceived achievement score was not controlled for (r = -.28); the coefficient was about 1.3 times what it was before and remained statistically significant (its p-value was still below .05). In terms of variance, the R2 for the partial correlation was .14, which means that direct instruction shared 14% of the variance in final perceived achievement score (compared to 8% when initial perceived achievement score was not controlled).

The results in Table 44 revealed that at Step 1, initial perceived achievement contributed significantly to the regression model, F(1, 88) = 6.73, p = .011, and accounted for 7% of the variation in final perceived achievement. In Step 2, introducing the IP team dummy variables explained an additional 8% of the variation in final perceived achievement, and this change in R2 was not significant, F(8, 80) = 0.99, p = .45. In Step 3, adding instructional type to the regression model explained an additional 14% of the variation in final perceived achievement, and this change in R2 was significant, F(1, 79) = 15.17, p = .001. When all three sets of independent variables were included in Step 3 of the regression model, none of the IP team variables was a significant predictor of final perceived achievement. The most important predictor of final perceived achievement was participatory instruction, which uniquely explained 14% of the variation in final perceived achievement. Together, the three sets of independent variables accounted for 29% of the variance in final perceived achievement. These findings were consistent with previous research studies (Gropper, 1983a; Slavin, 1990). Gropper (1983a) notes that “correlations between achievement and instruction were most significant” and “the relationship between instruction and achievement should hold” (p. 48). Cooperative learning strategies positively influence achievement outcomes (Slavin, 1990).

These findings have implications for instruction and learning styles.

Limitations

Participatory instruction showed a considerable impact on students’ perceived achievement, although this study has several important limitations. Within the classroom setting, the methodology of the instructor plays a crucial role in student achievement; however, as

Gropper (1983a) explains, “A few treatment types are universally applicable and can be varied to provide differential amounts of attention to match differential needs created by subject matter and learner characteristics” (p. 42). There are many factors that affect student achievement that the instructor and the researcher cannot control (Gropper,

1983a). Factors such as students’ initial perceived achievement levels (perceived self-efficacy), initial perceived self-concept levels, learning styles, preferences, strategies, teamwork experience, communication skills, discussion skills, and negotiation skills related to the IOM standards may be intervening variables that threaten the external validity of this study. Each instructional method produced varied effects, depending on the initial perceived self-concept levels and initial perceived achievement levels. The results from the pre-survey data analysis reflected the initial perceived achievement of the students, or the experiences individual students brought into the study.

The researcher used six intact classes. Using a convenience sample such as this has limitations that might negatively affect the results. Instructor effects on students can interact with instructional methods, influencing students’ learning. These effects might include the instructors’ knowledge and their experiences with teaching IOM standards, covering the curriculum being assessed at the end of the course, establishing rapport with the students, and managing classroom practices. Any of these effects can interact with instructional methods, increasing or decreasing students’ final perceived achievement.

Certain limitations, such as the choice of instrumentation, are intentionally entered into the research design. These limitations are referred to as delimitations and also serve as threats to external validity in this study. The choice of HSP 4510/5510 and the module project falls into this category. The decision to use the HSP 4510/5510 course and module project as a delimitation within this design was based on the fact that HSP students are from the School of Health Sciences and Professions and that they may have some experience with IOM standards in their various fields.

The choice to use a researcher-developed survey is a delimitation of this study.

The survey was developed based on categories from the IOM standards. Choices regarding the definition for each category and the placement of instructional methods under each category were the researcher’s responsibility. The survey is a self-report instrument with items based on the five IOM standards; thus, it may be considered subjective in nature.

Another limitation relates to the scale used to measure the students’ self-concepts on the standards, which had low reliability on some of the constructs. The scale could have influenced the results of the analyses, and this low reliability might also weaken the interpretability of the correlational results. Further, causal links between the independent variables (instructional types) and the dependent variable (final perceived achievement score) should not be inferred.

The researcher detected violations of the homogeneity of variance assumption. This may “lead to inflated or deflated risk of type I error for the independent samples t-test” (Warner, 2013, p. 192) and for the one-way ANOVA (Warner, 2013). It might be appropriate to set a different “alpha level for this test at .01 or even .001 (instead of .05)” (Warner, 2013, p. 192).
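One practical way to act on this limitation, sketched below in Python under assumed variable names, is to screen each comparison with Levene's test and, when the homogeneity assumption fails, fall back on Welch's t-test and a stricter alpha level, in line with Warner's (2013) suggestion. The function name and thresholds are illustrative, not part of the original analysis.

from scipy import stats

def robust_group_comparison(group_a, group_b, strict_alpha=0.01):
    # Screen for unequal variances with Levene's test; if the assumption is
    # violated, use Welch's correction and a stricter alpha (e.g., .01).
    lev_stat, lev_p = stats.levene(group_a, group_b)
    equal_var = lev_p >= 0.05
    t_stat, p_val = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
    alpha = 0.05 if equal_var else strict_alpha
    return {"levene_p": lev_p, "t": t_stat, "p": p_val, "reject_null": p_val < alpha}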

Finally, the participatory group consisted of three cohort groups from the 2013/2014 academic year, and the direct instruction group consisted of three cohort groups from the 2014/2015 academic year; together these cohorts were selected as the population for this study. This was done to add credibility to the process by having two different groups experience the conditions in fall, spring, and summer.

The results from the analyses conveyed the extent and manner in which the instructional method categories explained variance in the final perceived achievement mean scores in this sample, but not in the overall population. The assumption that the shared variance between the independent variables and the dependent variable in the population is zero was rejected because the R2 values were statistically significant at p < .05. Even so, causal links between the independent variables and the dependent variable should be inferred only with caution. In other words, for this sample of 90 HSP students, participatory instruction accounted for 24% of the variance in students’ final perceived achievement mean scores with professional teams (majors), and 29% of the variance in students’ final perceived achievement mean scores with inter-professional teams.

Implications

The choice of the design is based on a theoretical framework that explains the relationship(s) between perceived achievement levels and instruction. That is the case in this study. Gropper’s (1983a) metatheory of instruction was employed to throw more light on instructional teaching method types. This was a methodical attempt to help educators make practical decisions about research-based instructional teaching methods that may increase their students’ perceived achievement levels and self-concept levels on

IOM standards. In light of Gropper’s metatheory of instruction, the results of this study have clear implications. Instructional methods and students’ team preferences, specifically working in teams, can influence their learning as measured by the IOM Self-Reported Knowledge Achievement survey.

Implications for teaching/instruction. The HSP course in this study provided a participatory learning environment for undergraduate and graduate students at Ohio

University to assess the effectiveness of the activities carried out in previous semesters, make suggestions for improvements, discuss the details of the activities for the final project, assign roles for each member, and share experiences with team members so as to participate fully in the activities. As such, the course succeeded in the goal stated in the syllabus (the IOM standards). This model worked well for the HSP students, as reflected in the quality of their final group module project. It is a model that is recommended for use by

programs implementing Institute of Medicine standards that wish to help their HSP students develop a deeper understanding of inter-professional, quality patient care delivery in health settings. In order to replicate this study in other universities or schools, the following teaching implications are recommended. Readers should be aware that following these suggestions does not guarantee that students will exhibit all the characteristics of the IOM standards noted in this study.

Many of these implications focus on the importance of planning the course carefully. Health professions instructors may want to a) provide HSP students with the opportunity to plan their own activities in teams so that they can socialize and get to know each other’s professional roles; and b) provide HSP students with the opportunity to build team projects that have enough complexity to require negotiation and research, but that are not so complex that HSP students experience too much frustration. Other teaching implications relate to the social components of learning and the value of group work: a) health professions instructors may choose a student from each of the fields to form a team, b) they may provide real-world opportunities for HSP students to apply (share, or adapt) new knowledge and experience, and c) they may consider both group products and group processes.

Teamwork implications. For education, health professions curriculum planners should include content and opportunities that foster effective IP teamwork in health sciences and professions programs, and “continuing education so that health professionals can learn together, with, from, and about each other and their respective roles”

(DiGiovanni & McCarthy, in press; Sargeant et al., 2008). The health sciences and professions curriculum should be designed to foster critical thinking skills, teamwork skills, simulation, multitasking, judgment, networking, collective intelligence, and negotiation. The curriculum should orient courses around real-world problems that allow students to employ discussion, experience, and theories. For accrediting bodies, health sciences and professions educational interventions should be aligned to promote teamwork and to integrate content from other disciplines formally

(DiGiovanni & McCarthy, in press; Sargeant et al., 2008).

McCallin (2005) notes that team-based health care practice includes “parallel, consultative, collaborative, coordinated, multidisciplinary, interdisciplinary, and integrative” approaches. McCallin notes that the context may change how the concept is understood; what works well in one service may not work well in another service or country. McCallin asserts that leaders, resources, and environments may also impact new methods. McCallin reports that health professionals working well together improve client outcomes and job satisfaction, and that health professionals communicating and collaborating effectively may benefit both the patient and the provider. McCallin argues that developing IP practice needs a commitment to participate in shared learning and dialogue, and that dialogue fosters IP learning through negotiating meaning and rediscovering deeper meanings for collaboration. In this study, health professions instructors may create opportunities for HSP students to engage in shared learning in their professional and inter-professional teams. The instructor may create opportunities for HSP students to negotiate roles in inter-professional teams, work together in inter-professional teams, and practice together in inter-professional teams.

Implications for learning styles. Ducette et al. (1996) note that a) students have different learning styles, preferences, and capacities; b) these differences in styles, preferences, and capacities affect their learning; and c) matching the learner and the learning environment facilitates learning. Ducette et al. note that instructors should create opportunities that draw on multiple preferences and that offer multiple approaches for different learners to understand (Ducette et al., 1996). McDaniel (1995) notes that different training, theories, and working styles affect communication and accessibility. Bray and Rogers (1995) describe some challenges of learning styles, including a) different theories; b) language limitations; c) practice style limitations; d) inaccessibility of different providers; and e) different expectations for assessment and treatment.

Pashler et al. (2009) argue that “the existence of preferences says nothing about what these preferences might mean or imply for anything else, much less whether it is sensible for educators to take account of these preferences” (p. 108). Pashler et al. note that using learning styles helps educators to save time and money. According to

Robertson et al. (2011), the key messages for learning styles include a) “Personal awareness of learning styles and confidence in communicating this are first steps to achieving an optimal learning environment;” and b) “A conversation about learning styles between fieldwork supervisor and student enhances the fieldwork experience” (p.

39).

Implications for instructors. Chiu (2004) notes that students cannot always solve problems on their own; instructors should identify such students, evaluate their work, provide them with content supports, and use minimal guidelines to improve their performance (Chiu, 2004). In this study, health professions instructors should monitor and identify those HSP students who show low self-concepts and self-efficacy on the standards and teach them how to increase their self-concepts and self-efficacy on the standards.

Implications for students being paid. Students were paid for their participation in this project. That could mean that students were more disposed to rate the course or their participation favorably as a result. There are several reasons why payment does not appear to have influenced performance in this study. First, at the outset the

HSP students were paid for the class. They were not paid contingent on completion.

Since they had already been paid prior to completing any surveys, it seemed less likely that they would shape their responses in anticipation of payment. Second, students also gave feedback about the level of work required in the course and rated it as excessive on course evaluations. This suggests that students were critical consumers of the course and were not necessarily responding in a socially desirable way.

Recommendations

Recommendations for health professions instructors. To teach effectively, health professions instructors should be able to shift their classroom instruction toward participatory learning. They should teach the IOM standards in a way that HSP students view as meaningful and significant. Health professions instructors should be able to adjust instructional strategies to match changes in the teaching and learning situation of HSP students. The direct instruction method apparently produced smaller gains, in terms of effect sizes, in this study. Direct instruction may best serve as a way to enhance participatory instruction through integration of the IOM standards in group module projects. Based on this study, the use of direct instruction for delivering IOM standards should not be the foundation of instruction, nor should it be employed outside the context of participatory instruction.

To improve HSP students’ self-concept levels on the IOM standards for better perceived achievement, health professions instructors should explain explicitly from the first class meeting that they are interested in every HSP student as a unique person and that they are committed to guiding students toward self-realization rather than diminishing them; they should state explicitly what they expect from their students and create opportunities for students to set their own aspirations; and they should show interest in their students’ needs and encourage them to express their ideas and opinions, in an environment based on interaction and collaboration where enthusiasm and humor are present.

To further improve HSP students’ self-concept levels on the IOM standards for better perceived achievement, health professions instructors should teach their students to clearly define goals, analyze them, identify the steps toward their realization, set time limits, take into consideration possible obstacles and solutions, and evaluate and monitor progress; instructors should also teach HSP students self-evaluation, which is an effective way to enhance self-concept, since through self-evaluation HSP students learn to describe themselves objectively and identify the behavior they want to change. Furthermore, health professions instructors should encourage HSP students to focus on the positive aspects of themselves, talk about their positive feelings, accomplishments, strengths, and what they are proud of, and share their successes with others; this will boost their self-concept and make them feel good about themselves. Instructors should also encourage HSP students to talk about their negative feelings and weaknesses. In addition, HSP students should get the feeling that their instructors have confidence in them and believe in their ability to learn and to be competent, and that the instructor’s role is not to judge them but to help and assist them in their academic achievements and accomplishments. Health education curriculum planners should include and sequence the IOM standards in the curriculum in the following order of ease: interdisciplinary teamwork, patient-centered care, evidence-based practice, informatics, and quality improvement. More time should be allocated for teaching informatics and quality improvement. Health education curriculum planners should create space for health professions instructors to reassess their practice and update their research knowledge and skills. Health professions instructors should emphasize the informatics and quality improvement standards, since the majority of HSP students had low effect sizes for change in self-concept scores on these two standards.

Final Recommendation

Instructors should emphasize participatory instruction when the goal is to increase interdisciplinary students’ perceived achievement scores. Instructors should emphasize the desired learning outcomes (the IOM standards). Instructors should make decisions on pacing of the curriculum (the IOM standards) so that HSP students in their inter-professional teams have every opportunity to learn and practice together what they are expected to learn. Instructors must use teaching methods that fit both the concept

(standard) and the students. The students must learn the concept within the framework of participatory instruction. Instructors should assess whether HSP students are achieving the instructional objectives of the IOM standards on the group module project with peer- and self-assessment methods. Health professions instructors should then decide whether to adjust their teaching methods based on the assessment results.

Suggestions for Future Studies

This study focused on a group module project involving HSP students’ perceived achievement levels and self-concept levels on IOM standards. Other areas that address the limitations discussed in the context of this study deserve attention. In addition, the results from this study also suggest future research.

The research instrument developed for this study has expert content validation and high reliability coefficients, adding to its strength as a research instrument. Further, it is believed that this study can build trust and confidence among health sciences and professions students. Health professions instructors can perform similar research in which they choose participatory instruction and examine their students’ perceived achievement scores. Future research can use the research instrument to study HSP students in Ohio on a large scale, or adapt the instrument developed in this study into an observational scale. Future research can also use an observation method in the IP teamwork setting, coding HSP students’ team behaviors according to the instrument’s item categories and providing a sound basis for interviews or focus group discussions aimed at examining the factors that affect HSP students’ collaborative decision making. Another possibility for future research is to utilize multiple-choice, paper-and-pencil self-concept and achievement tests on the IOM standards rather than a self-reported knowledge and skills survey, and compare the results. However, it would be difficult for an actual achievement test to be valid in this situation.

In these studies, age has not been used as a factor in the development of other inter-professional scales. Future research can investigate age as one of the demographic factors influencing HSP students’ perceived self-efficacy and their final perceived achievement. When the assumption of homogeneity of variances is violated, future research can employ the non-parametric Wilcoxon rank sum test, which converts the scores to ranks, instead of the independent samples t-test on raw scores, and the non-parametric Kruskal-Wallis H test, which also converts the scores to ranks, instead of a one-way ANOVA on raw scores.
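As an illustration of these rank-based alternatives, the short Python sketch below applies the Wilcoxon rank-sum (Mann-Whitney U) test to two hypothetical groups of change scores and the Kruskal-Wallis H test to several hypothetical major groups; the data values and group labels are invented for demonstration and do not come from this study.

from scipy import stats

# Hypothetical change-score data by team preference and by major.
scores_team = [4.2, 3.8, 4.5, 3.9, 4.1]
scores_alone = [3.6, 4.0, 3.4, 3.7, 3.9]
scores_by_major = {
    "nursing": [4.0, 4.3, 3.8],
    "nutrition": [3.5, 3.9, 4.1],
    "exercise_physiology": [4.2, 3.7, 4.0],
}

# Rank-based alternative to the independent samples t-test (two groups).
u_stat, u_p = stats.mannwhitneyu(scores_team, scores_alone, alternative="two-sided")

# Rank-based alternative to one-way ANOVA (three or more groups).
h_stat, h_p = stats.kruskal(*scores_by_major.values())

print(u_p, h_p)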

Another possible direction is to frame this study as an exploratory study focused on the influence of instructional methods on the individual reporting categories of the IOM standards. Future research could also identify which instructional methods influence HSP students’ perceived achievement levels for specific reporting categories, which would assist healthcare instructors in developing quality instruction for remediation based on peer assessment results. Another possibility for future research is to extend the study to a full academic year rather than a semester, allowing HSP students to become experts on the IOM standards. A further possibility is to investigate whether participatory instruction is more beneficial for low-achieving HSP students than for higher-achieving HSP students.

Another possible line of research is to examine the effects of participatory instruction on students' perceived achievement levels and self-concept levels on the IOM standards in other inter-professional schools, as well as the influence that a combination of direct and participatory instructional methods may have on HSP students' perceived achievement levels and self-concept levels on the IOM standards.

Future research can investigate HSP students' self-concepts of the IOM standards as well as their self-efficacy levels on those standards. It can also examine the effects of participatory instruction on HSP students' perceived achievement and self-concepts of the IOM standards, and on their perceived achievement and self-efficacy on those standards.

The finding of a positive and significant relationship between perceived achievement and participatory instruction warrants further research. It may be important to determine whether the relationship between final perceived achievement and participatory instruction found in this study is replicated in other samples. The negative relationship found between direct instruction and final perceived achievement deserves special attention. Definitive conclusions cannot be drawn from a single study; therefore, replication of these results is essential to determine whether the relationship between final perceived achievement and participatory instruction exists in the population. A larger sample of HSP students would also provide more power for the statistical analyses.
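As a rough guide to the sample sizes such a replication might require, the following is a minimal sketch of an a priori power calculation using statsmodels; the medium effect size, alpha, and power values are illustrative assumptions, not estimates from this study.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.5,  # assumed medium standardized mean difference (Cohen's d)
        alpha=0.05,       # two-tailed significance level
        power=0.80,       # desired statistical power
        ratio=1.0,        # equal group sizes
    )
    print(round(n_per_group))  # roughly 64 students per group under these assumptions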

Another possibility for future research is to replicate the use of participatory instruction in schools of education, engineering, and business, beyond the school of health sciences and professions.

Conclusion

The results of the analysis supported five of the nine hypotheses. The findings are as follows.

First, the change in overall perceived achievement scores for HSP students taught using participatory instruction was statistically significantly greater than the change in overall perceived achievement scores for HSP students taught without using participatory instruction.

Second, the change in overall perceived achievement scores for the HSP students who preferred working in teams and were taught using participatory instruction was statistically significantly greater than the change for the HSP students who preferred working in teams but were taught without using participatory instruction.

Third, for HSP students from various majors, participatory instruction had a positive and statistically significant effect on final perceived achievement scores regarding meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Fourth, for HSP students with various team preferences, participatory instruction had a positive and statistically significant effect on final perceived achievement scores regarding meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores.

Fifth, for HSP students in various IP teams, participatory instruction had a positive and statistically significant effect on final perceived achievement scores regarding meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores. In the classroom setting, then, participatory instruction can be instructionally effective when the common goal is to improve students' inter-professional collaborative practice, collaborative problem solving, and democratic decision making; to promote HSP students' positive self-concepts on the IOM standards; and to increase HSP students' final perceived achievement on the IOM standards. Direct instruction in a classroom setting can be used when the goal is to increase HSP students' individual achievement and self-concepts.
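For readers who wish to see the form of analysis implied by "controlling for their initial perceived achievement scores," the following is a minimal sketch of a covariate-adjusted (ANCOVA-style) model in Python; the column names and values are hypothetical and do not reproduce the study's dataset.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "pre":    [3.5, 4.0, 4.2, 3.8, 4.1, 3.9],          # hypothetical initial scores
        "post":   [5.2, 5.5, 4.9, 4.6, 5.0, 4.7],          # hypothetical final scores
        "method": ["participatory"] * 3 + ["direct"] * 3,  # instructional type
    })

    # Final score regressed on instructional method, adjusting for the initial score
    model = smf.ols("post ~ pre + C(method)", data=df).fit()
    print(model.summary())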

Chapter Summary

This chapter discussed the major findings of the study and extended beyond them. First, the scope and the data collection and analysis procedures were elaborated.

Second, the research questions and their hypotheses were provided. Third, the chapter listed the main findings for both the quantitative and qualitative data.

Fourth, the findings for each hypothesis were discussed, explained in light of the findings of research question 10, and then related to previous research. Fifth, the chapter discussed the findings' implications for instruction, IP teams, learning styles, preferences, and curriculum. Finally, the chapter provided recommendations and suggestions for future research.


References

Adelman, H. S. (1995). : Broadening the focus. Psychological Science,

6, 61-62.

Agresti, A., & Finlay, B. (2008). Statistical methods for the social sciences (4th ed.).

Upper Saddle River, NJ: Prentice Hall.

Appel, K., Buckingham, E., Jodoin, K., & Roth, D. (2012). Participatory learning and

action toolkit: For application in BSR’s global programs. Paris, France: Jennifer

Schappert.

Argyrous, G. (2005). Statistics for research: With a guide to SPSS (2nd ed.). Thousand

Oaks, CA: SAGE Publication Ltd.

Asgari, S., & Dall’Alba, G. (2011). Improving group functioning in solving realistic

problems. International Journal for the Scholarship of Teaching and Learning,

5(1), 8.

Bandura, A., & Cervone, D. (1983). Self-evaluative and self-efficacy mechanisms

governing the motivational effects of goal systems. Journal of Personality and

Social Psychology, 45(5), 1017-1028.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavior change.

Psychological Review, 84(2), 191-215.

Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist,

37(2), 122-147.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory.

Englewood Cliffs, NJ: Prentice Hall. 374

Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning.

Educational Psychologist, 28(2), 117-148.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.

Bandura, A., & Locke, E. A. (2003). Negative self-efficacy and goal effects revisited.

Journal of Applied Psychology, 88(1), 87-99.

Barr, H. (1998). Competent to collaborate: Towards a competency-based model for

interprofessional education. Journal of Interprofessional Care, 12(2), 181–187.

Barr, H. (2009). Interprofessional education (3rd ed.). In J. A. Dent, & R. M. Harden

(Eds.), A practical guide for medical teachers (pp. 187-192). Edinburgh:

Churchill Livingstone Elsevier.

Barr, H., Koppel, I., Reeves, S., Hammick, M., & Freeth, D. S. (2008). Effective

interprofessional education: Argument, assumption and evidence (promoting

partnership for health). John Wiley & Sons. Retrieved from

https://books.google.com/books?hl=en&lr=&id=tdMrUOcHjMIC&oi=fnd&pg=P

P2&ots=qPIuxsBYeZ&sig=NFbp6duEWXPfn0DVGmzkaX6KUK4

Bart, M. (2011). Tips for creating a participatory classroom environment. Faculty Focus.

Retrieved from http://www.facultyfocus.com/articles/teaching-and-learning/tips-

for-creating-a-participatory-classroom-environment/

Bergan, J. (1995). Behavioral training and the new mental health: Are we learning what

we need to know? The Behavior Therapist, 18, 161-164.

Binder, H. (2013). The five biggest problems in healthcare today. Pharma and

Healthcare. Retrieved from 375

http://www.forbes.com/sites/leahbinder/2013/02/21/the-five-biggest-problems-in-

health-care-today/

Bloom, B. S. (1976). Human characteristics and school learning. New York, N.Y:

McGraw-Hill Book Company.

Bodner, G. M., Metz, P. A., & Casey, K. L. (2014). Twenty-five year experience with

interactive instruction in chemistry. In Learning with Understanding in the

Chemistry Classroom (pp. 63–74). Springer. Retrieved from

http://link.springer.com/chapter/10.1007/978-94-007-4366-3_8

Bray, J. H., & Rogers, J. C. (1995). Linking psychologists and family physicians for

collaborative practice. Professional Psychology: Research and Practice, 26(2),

132-138.

Brisolara, S. (1998). The history of participatory evaluation and current debates. In E.

Whitmore (Ed.), Understanding and practicing participatory evaluation (pp. 25-

41). San Francisco, CA: Jossey-Bass Inc.

Brooks, J. G., & Brooks, M. G. (1999). In search of understanding: The case for

constructivist classrooms. Alexandria, VA: Association for Supervision and

Curriculum Development.

Buchbinder, S. B., & Thompson, J. M. (2010). Career opportunities in health care

management: Perspectives from the field. Sudbury, MA: Jones & Bartlett

Publishers, LLC. Retrieved from

http://samples.jbpub.com/9780763759643/59643_CHFM_i_xviii.pdf 376

Bunce, D. M. (2014). Challenging myths about teaching and learning chemistry. In

Learning with Understanding in the Chemistry Classroom (pp. 63–74). Springer.

Retrieved from http://link.springer.com/chapter/10.1007/978-94-007-4366-3_8

Burke, B. (1998). Evaluating for a change: reflections on participatory methodology. In

E. Whitmore (Ed.), Understanding and practicing participatory evaluation (pp.

43-56). San Francisco, CA: Jossey-Bass Inc.

Cahn, P. S. (2014). In and out of the curriculum: An historical case study in

implementing interprofessional education. Journal of Interprofessional Care,

28(2), 128-133.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs

for research. Boston, MA: Houghton Mifflin Company.

Cardellini, L. (2014). Problem solving through cooperative learning in the chemistry

classroom. In Learning with Understanding in the Chemistry Classroom (pp.

149–163). Springer. Retrieved from http://link.springer.com/chapter/10.1007/978-

94-007-4366-3_8

Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting

paradigms: From Flexner to competencies. Academic Medicine, 77(5), 361-367.

Chilisa, B., & Tsheko, G. N. (2014). Mixed methods in indigenous research: building

relationships for sustainable intervention outcomes. Journal of Mixed Methods

Research, 8(3), 222-233. 377

Chiu, M. M. (2004). Adapting teacher interventions to student needs during cooperative

learning: How to improve student problem solving and time on-task. American

Educational Research Journal, 41(2), 365-399.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education. New

York, NY: Routledge Taylor & Francis Group.

Connors, S. C., & Magilvy, J. K. (2011). Assessing vital signs: Applying two

participatory evaluation frameworks to the evaluation of a college of nursing.

Evaluation and Program Planning, 34, 79-86.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimental: Design and analysis issues

for field settings. Boston, MA: Houghton Mifflin Company.

Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. In E. Whitmore

(Ed.), Understanding and practicing participatory evaluation (pp. 5-23). San

Francisco, CA: Jossey-Bass Inc.

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods

approaches. Thousand Oaks, CA: SAGE Publications, Inc.

Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods

research. Thousand Oaks, CA: SAGE Publications, Inc.

Curran, V. R., Sharpe, D., Forristall, J., & Flynn, K. (2008). Attitudes of health sciences

students towards interprofessional teamwork and education. Learning in Health

and Social Care, 7(3), 146-156.

Curry, L. (1990). A critique of the research on learning styles. Retrieved from:

http://www.ascd.org/ASCD/pdf/journals/ed_lead/el_199010_curry.pdf 378

Curtis, M. J., & Stollar, S. A. (1996). Applying principles and practices of organizational

change to school reform. School Psychology Review, 25, 409-417.

Cvetkovic, D. (2013). Evaluation of FCS self and peer-assessment approach based on

cooperative and engineering design learning. In Engineering in Medicine and

Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE

(pp. 2519–2522). IEEE. Retrieved from

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6610052

Dahllöf, U.S. (1971). Ability grouping, content validity, and curriculum process analysis.

New York, NY: Teachers College Press

Daigneault, P.-M., & Jacob, S. (2009). Toward accurate measurement of participation:

Rethinking the conceptualization and operationalization of participatory

evaluation. American Journal of Evaluation, 30(3), 330-348.

Daigneault, P.-M., Jacob, S., & Tremblay, J. (2012). Measuring stakeholder participation:

An empirical validation of the participatory evaluation measurement instrument.

Evaluation Review, 36(4), 243-270.

Davis, K., Schoen, C., & Stremikis, K. (2010). Mirror, mirror on the wall: How the

performance of the U.S. health care system compares internationally.

Commonwealth Fund Publication Number 1400

De Jaegher, H., & Di Paolo, E. (2007). Participatory sense-making: An enactive approach

to social cognition [Electronic version]. Phenomenology Cognitive Science, 6,

485-507. Retrieved from https://hannedejaegher.wordpress.com/ 379

De Vries, M. J. (2005). Teaching about technology: An introduction to the philosophy of

technology for non-philosophers. The Netherlands: Springer.

Devetak, I., & Glazar, S. A. (2014). Approaches in chemistry teaching for learning with

understanding cooperative and collaborative learning. In Learning with

Understanding in the Chemistry Classroom (pp. 127–128). Springer. Retrieved

from http://link.springer.com/chapter/10.1007/978-94-007-4366-3_8

DiGiovanni, J. J., & McCarthy, J. W. (in press). IPE 102: Innovative inter-professional

education that includes audiology and speech-language pathology. In A. Johnson

(Ed.), Inter-professional education and inter-professional practice in

communication sciences and disorders: An introduction and case-based examples

of implementation in education and health care settings (pp. 29–55). Rockville,

MD: American Speech-Language-Hearing Association.

Director, S. M. (1974). Underadjustment bias in the quasi-experimental evaluation of

manpower training. (Unpublished doctoral dissertation). Northwestern

University, IL.

Doerry, E., & Palmer, J. D. (2011).Improving efficacy of peer-evaluation in team project

scenarios. Retrieved from

http://www.asee.org/file_server/papers/attachment/file/0001/0683/TeamTracking-

FINAL-submitted.pdf

Ducette, J. P., Sewell, T. E., & Shapiro, J. P. (1996). Diversity in education: Problems

and possibilities. In F. B. Murray (Ed.), The teacher educator’s handbook: 380

Building a knowledge base for the preparation of teachers, (pp. 323-380). San

Francisco, CA: Jossey-Bass Publishers.

Duderstadt, J. J. (2007). Engineering for a changing road, a roadmap to the future of

engineering practice, research, and education. Retrieved from

http://deepblue.lib.umich.edu/handle/2027.42/88638

Duncan, O. D. (1969). Contingencies in constructing causal models. In E. F. Borgatta, &

G. W. Bohrnstedt (Eds.), Sociological methodology (pp. 74-112). York, PA:

Jossey-Bass, Inc.

Dunn, R., Beaudry, J., & Klavas, A. (1989). Survey of research on learning styles.

Educational Leadership, March, 50-58.

Elliot, A. J., & Thrash, T. M. (2001). Achievement goals and the hierarchical model of

achievement motivation. Educational Psychology Review, 13(2), 134-156.

Ende, J., Kelley, M., & Sox, H. (1997). The federated council of internal medicine’s

resource guide for residency education: An instrument for curricular change.

Annals of Internal Medicine, 127(6), 454–457.

Epstein, R. M., & Hundert, E. M. (2002). Defining and assessing professional

competence. Journal of American Medical Association, 287(2), 226-235.

Felder, R. M., & Brent, R. (2000). Active and cooperative learning. Retrieved from

http://www.personal.psu.edu/ryt1/blogs/totos_tidbits/Felder.pdf

Felder, R. M., & Brent, R. (2003). Designing and teaching courses to satisfy the ABET

engineering criteria. Journal of Engineering Education, 92(1), 7–25. 381

Felder, R. M., & Brent, R. (2004). The ABC’s of engineering education: ABET, Bloom’s

taxonomy, cooperative learning, and so on. In Proceedings of the 2004 American

Society for Engineering Education Annual Conference & Exposition (p. 1).

Retrieved from

http://aucache.autodesk.com/au2011/sessions/5091/additional_materials/v2_ED50

91_Miller_AdditionalMaterials.pdf

Felder, R. M., & Brent, R. (2005). Understanding student differences. Journal of

Engineering Education, 94(1), 57–72.

Felder, R. M., & Brent, R. (2007). Cooperative learning. In Active learning: Models from

the analytical sciences, ACS Symposium Series (Vol. 970, pp. 34–53). Retrieved

from

http://kenanaonline.com/files/0030/30183/%D8%AA%D8%B9%D8%A7%D9%8

8%D9%86%D9%8A.pdf

Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). Thousand

Oaks, CA: SAGE Publications Inc.

Fitz-Gibbon, C. T. (1996). Monitoring education: Indicators, quality, and effectiveness.

New York, NY: The University of Chicago Library.

Ford, E. M. (1992). Motivating humans: Goals, emotions, and personal agency beliefs.

Newbury Park, CA: SAGE Publications, Inc.

Frenk, J., Chen, L., Bhutta, Z. A., Cohen, J., Crisp, N., Evans, T., … Zurayk, H. (2010).

Health professionals for a new century: Transforming education to strengthen 382

health systems in an interdependent world. The Lancet 376 (9756): 1923-1958.

Doi:10.1016/S0140-6736(10)61854-5.

Gagné, R. M., & Briggs, L. J. (1974). Principles of instructional design. New York, NY:

Holt, Rinehart and Winston.

Ginns, I. S., Heirdsfield, A., Atweh, B., & Watters, J. J. (2001). Beginning teachers

becoming professionals through action research. Educational Action Research

Journal, 9(1),109-131.

Gloe, D. (1998). Quality management: A staff development tradition. In K. J. Kelly-

Thomas (Ed.), Clinical and nursing staff development: Current competence,

future focus (pp.301-336). Philadelphia, PA: Lippincott-Raven Publishers.

Good, T. L., Biddle, B. J., & Brophy, J. E. (1975). Teachers make a difference. New

York, NY: Holt, Rinehart and Winston.

Gregorc, A. F., & Butler, K. A. (1984). Learning is a matter of style. Vocational

Education, 23, 27-29.

Greiner, A. C., & Knebel, E. (2003). Health professions education: A bridge to quality.

Committee on the Health Professions Education Summit, ISBN: 0-309-51678-1,

192 pages. Retrieved from http://www.nap.edu/catalog/10681.html

Gropper, G. L. (1968). Programing visual presentations for procedural learning. AV

Communication Review, 16(1), 33-56.

Gropper, G. L. (1974). Instructional strategies. Englewood Cliffs, NJ: Educational

Technology Publications. 383

Gropper, G. L. (1975). Diagnosis and revision in the development of instructional

materials. Englewood Cliffs, NJ: Educational Technology Publications.

Gropper, G. L. (1976). What should a theory of instruction concern itself with?

Educational Technology, 16 (10), 7-12.

Gropper, G. L. (1983a). A metatheory of instruction: A framework for analyzing and

evaluating instructional theories and models. In C. M. Reigeluth (Ed.),

Instructional-design theories and models: An overview of their current status (pp.

37-53). Hillsdale, NJ: Lawrence Erlbaum Associates.

Gropper, G. L. (1983b). A behavioral approach to instructional prescription. In C. M.

Reigeluth (Ed.), Instructional-design theories and models: An overview of their

current status (pp. 101-161). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hall, P., & Weaver, L. (2001). Interdisciplinary education and teamwork: A long and

winding road. Medical Education, 35(9), 867–875. doi:10.1046/j.1365-

2923.2001.00919.x

Haas, M. S. (2002). The influence of teaching methods on student achievement on

Virginia’s end of course test for algebra I (Unpublished

doctoral dissertation). Virginia Polytechnic Institute and State University, VA.

Hattie, J. A. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to

achievement. New York, NY: Routledge.

Hedges, H., & Cullen, J. (2012). Participatory learning theories: A framework for early

childhood pedagogy. Early Child Development and Care, 182(7), 921-940. doi:

10.1080/03004430.2011.597504. 384

Heron, J. (1989). Facilitators’ handbook. New York, NY: Nichols Publishing Company.

Heron, J. (1993). Group facilitation theories and models for practice. East Brunswick,

NJ: Nichols Publishing Company.

House, R. E. (2005). Deliberative democratic evaluation. In S. Mathison (Ed.),

Encyclopedia of evaluation (pp. 104-108). Thousand Oaks, CA: Sage.

Hsu, C-H., & Moore, D. R. (2011). Formative research on the goal-based scenario model

applied to computer delivery and simulation. The Journal of Applied Instructional

Design, 1(1), 13-24.

Hughes, C. (2003). Disseminating qualitative research in educational settings a critical

introduction. UK: Open University Press.

Hughes, G. (2011). Towards a personal best: A case study for introducing ipsative

assessment in higher education: Studies in Higher Education, 36(3), 353-367.

Hundert, E. M., Hafferty, F., & Christakis, D. (1996). Characteristics of the informal

curriculum and trainees’ ethical choices. Academic Medicine, 71(6), 624–42.

IPEC (2011). Core competencies for interprofessional collaborative practice: Report of

an expert panel. Retrieved from https://ipecollaborative.org/uploads/IPEC-Core-

Competencies.pdf

Jacob, S., Ouvrard, L., & Bélanger, J.-F. (2011). Participatory evaluation and process use

within a social aid organization for at-risk families and youth. Evaluation and

Program Planning, 34(2), 113-123. 385

Jenkins, H., Purushotma, R., Weigel, M., Clinton, K., & Robison, A. J. (2009).

Confronting the challenges of participatory culture: Media education for the 21st

century. Cambridge: The MIT Press.

Kelly-Thomas, K. (1998). Clinical and nursing staff development. Philadelphia, PA:

Lippincott.

Kenny, R. F., & Wirth, J. (2009). Implementing participatory, constructivist learning

experiences through best practices in live interactive performance. The Journal of

Effective Teaching, 9(1), 34-47. Retrieved from

http://uncw.edu/cte/et/articles/vol9_1/kenny.pdf.

King, J. A. (1995). Involving practitioners in evaluation studies: How viable is

collaborative evaluation in schools? In J. B. Cousins, & L. Earl (Eds.),

Participatory evaluation in education. studies in evaluation use and

organizational learning. London: The Falmer Press.

Knowles, M. S., Holton, E. F., & Swanson, R. A. (2012). The adult learner: The

definitive classic in adult education and human resources development (7th ed.).

New York, NY: Routledge Taylor & Francis.

Laudon, J. M. D. (2010). Participatory to the end: Planning and implementation of a

participatory evaluation strategy (FES Outstanding Graduate Student Paper

Series). Toronto, Ontario, Canada: York University.

Leach, D. C. (2000). Evaluation of competency: An ACGME perspective. American

Journal of Physical Medicine & Rehabilitation, 79(5), 487–489. 386

Locatis, C. (2007). Performance, instruction, and technology in health care education. In

R. A. Reiser, & J. V. Dempsey (Eds.), Trends and issues in instructional design

and technology (pp. 197-208). New Jersey: Pearson Education, Inc

Ludwigsen, K. R., & Albright, D. G. (1994). Training psychologists for hospital practice:

A proposal. Professional Psychology: Research and Practice, 25(3), 241-246.

Lutze-Mann, L. (2014). Student peer assessment. Retrieved from

https://teaching.unsw.edu.au/peer-assessment.

Maehr, M. (1983). On doing well in science: Why Johnny no longer excels; why Sarah

never did. In S. Paris, G. Olson, & H. Stevenson (Eds.), Learning and motivation

in the classroom (pp. 179-210). Hillsdale, NJ: Lawrence Erlbaum Associates,

Publishers.

Mauch, J. E., & Birch, J. W. (1989). Guide to the successful thesis and dissertation (2nd

ed.). New York, NY: Marcel Dekker.

Mauch, J. E., & Park, N. (2003). Guide to the successful thesis and dissertation: A

handbook for students and faculty (5th ed.). New York, NY: Marcel Dekker.

McCallin, A. M. (2005). Interprofessional practice: Learning how to collaborate.

Contemporary Nurse 20, 28-37.

McDaniel, S. H. (1995). Collaboration between psychologists and family physicians:

Implementing the biopsychosocial model. Professional Psychology: Research and

Practice, 26(2), 117-122.

McFadyen, A. K., Webster, V., Strachan, K., Figgins, E., Brown, H., & McKechnie, J.

(2005). The readiness for interprofessional learning scale: A possible more stable 387

sub-scale model for the original version of RIPLS. Journal of Interprofessional

Care, 19(6), 595-603.

McGillan, R., Tarini, P., & Small, J. (2001). U.S. Health care providers say quality of

care is “unacceptable”. Embargoed for Release: May 8. Retrieved from

http://www.ihi.org/about/news/Documents/IHIPressRelease_USHealthcareQualit

yUnacceptable_May01.pdf

McKenzie, W. (2014). Self-selecting real-world learning communities. Retrieved from

http://www.wholechildeducation.org/blog/self-selecting-real-world-learning-

communities

McNair, R. P. (2005). The case for educating health care students in professionalism as

the core content of interprofessional education. Medical Education, 39(5), 456–

464.

Mead, N., & Bower, P. (2000). Patient-centeredness: A conceptual framework and

review of the empirical literature. Social Science Medicine, 51(7), 1087-1110.

Mercer, N. (1995). The guided construction of knowledge: Talk amongst teachers and

learners. Clevedon: Multilingual Matters Ltd.

Meyers, L. S., Gamst, G., & Guarino, A. J. (2013). Applied multivariate research: Design

and interpretation. Thousand Oaks, CA: Sage Publications.

Moffic, H. S, Brochstein, J., Blattstein, A., & Adams, G. L. (1983). Attitudes in the

provision of public sector health and mental health care. Social Work in Health

Care, 8(4), 17-28. 388

Moore, D. R. (2004). A framework for preparing students to design their own learning

strategies. College Quarterly, 7 (4). Retrieved from

http://www.senecac.on.ca/quarterly/2004-vol07-num04-fall/moore.html.

Morrison, G. R., Ross, S. M., Kemp, J. E., & Kalman, H. K. (2007). Designing effective

instruction (5th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Murphy, P. (1996). Integrating learning and assessment-the role of learning theories? In

P. Woods (Ed.), Contemporary issues in teaching and learning (pp. 173-193).

New York, NY: Routledge.

Murray, H. (1963). Explorations in personality: A clinical and experimental study of fifty

men of college age. New York, NY: Oxford University Press.

Nastasi, B. K., & Varjas, K. (1998). Participatory model of mental health programming:

Lessons learned from work in a developing country. School Psychology Review,

27(2), 260-276.

Oakley, B., Felder, R. M., Brent, R., & Elhajj, I. (2004). Turning student groups into

effective teams. Journal of Student Centered Learning, 2(1), 9–34.

Okasha, A. (1997). The future of medical education and teaching: A psychiatric

perspective. American Journal of Psychiatry, 154(6), 77-85.

Parsell, G., & Bligh, J. (1999). The development of a questionnaire to assess the

readiness of health care students for interprofessional learning (RIPLS). Medical

Education, 33, 95-100.

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2009). Learning styles: Concepts

and evidence. Association for Psychological Science, 9(3), 105-119. 389

Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand

Oaks, CA: Sage Publications.

Pedersen, S., & Liu, M. (2003). Teachers’ beliefs about issues in the implementation of a

student-centered learning environment. Educational Technology Research and

Development, 51(2), 57–76.

Pietiläinen, V. (2012). Testing the participatory education evaluation concept in a

national context. Studies in Educational Evaluation, 38, 9-14.

Prideaux, D. (2009). Integrated learning (3rd ed.). In J.A. Dent, & R.M. Harden (Eds.), A

practical guide for medical teachers (pp. 181-186). Edinburgh: Churchill

Livingstone Elsevier.

Reeve, J. (2005). Understanding motivation and emotion (4th ed.). Hoboken, NJ: Wiley.

Rich, S. (2010). Participatory learning overview. Retrieved November 4, 2015, from

https://sararich.wordpress.com/2010/12/20/participatory-learning-overview/.

Robertson, L., Smellie, T., Wilson, P., & Cox, L. (2011). Learning styles and fieldwork

education: Students’ perspectives. New Zealand Journal of Occupational

Therapy, 58(1), 36-40.

Ryan, K., Greene, J., Lincoln, J., Mathison, Y., & Mertens, M. D. (1998). Advantages

and challenges of using inclusive evaluation approaches in

evaluation practice. American Journal of Evaluation, 19, 101-122.

Saldaña, J. (2009). The coding manual for qualitative researchers. Los Angeles, CA:

SAGE Publications Ltd. 390

Sargeant, J. (2009). Theories to aid understanding and implementation of

interprofessional education. Journal of Continuing Education in the Health

Professions, 29(3), 178-184.

Sargeant, J., Loney, E., & Murphy, G. (2008). Effective interprofessional teams: “Contact

is not enough” to build a team. Journal of Continuing Education in the Health

Professions, 28(4), 228-234.

Sarka, G., & Chassiakos, Y. R. (2010). Collaboration in ambulatory care: Integrating the

practitioners of medicine. In B. Freshman, L. Rubino, & Y. R. Chassiakos (Eds.),

Collaboration across the disciplines in health care (pp. 299-316). Sudbury, MA:

Jones and Bartlett Publishers.

Schank, R. C., Berman, T. R., & Macpherson, K. A. (1999). Learning by doing. In C. M.

Reigeluth (Ed.), Instructional-design theories and models: Vol. 2, a new

paradigm of instructional theory (pp. 161-181). Mahwah, NJ: Lawrence Erlbaum

Associates.

Schank, R. C., Fano, A., Bell, B., & Jona, M. (1993/1994). The design of goal-based

scenarios. Journal of the Learning Sciences, 3(4), 305-345.

Sefton, A. E. (2009). Problem-based learning (3rd ed.). In J. A. Dent, & R. M. Harden

(Eds.), A practical guide for medical teachers (pp. 174-180). Edinburgh:

Churchill Livingstone Elsevier.

Shank, P. (2013). More on designing and teaching online courses with adult students in

mind. Faculty Focus. Retrieved from 391

http://www.facultyfocus.com/articles/online-education/more-on-designing-and-

teaching-online-courses-with-adult-students-in-mind/

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational

Researcher, 29(7), 4-14.

Sheppard, M. (1992). Contact and collaboration with general practitioners: A comparison

of social workers and community psychiatric nurses. British Journal of Social

Work, 22(4), 419-436.

Shuayb, M. (2014). Appreciative inquiry as a method for participatory change in

secondary schools Lebanon. Journal of Mixed Methods Research, 8(3), 299-307

Silvernail, D. L. (1987). Developing positive student self-concept. Nea Professional

Library-National Education Association.

Slavin, R. E. (1990). Cooperative learning: Theory, research, and practice. Englewood

Cliffs, NJ: Prentice-Hall, Inc.

Smith, S. R. (2009). Outcome-based curriculum (3rd ed.). In J. A. Dent, & R. M. Harden

(Eds.), A practical guide for medical teachers (pp. 161-167). Edinburgh:

Churchill Livingstone Elsevier.

Splan, R. K., Porr, C. A., & Broyles, T. W. (2011). Undergraduate research in

agriculture: Constructivism and the scholarship of discovery. Journal of

Agricultural Education, 52(4), 56–64.

Strijbos, J.-W., Narciss, S., & Dünnebier, K. (2010). Peer feedback content and sender’s

competence level in academic writing revision tasks: Are they critical for

feedback perceptions and efficiency? Learning and Instruction, 20(4), 291–303. 392

Svinicki, M. D. (1991). Practical implications of cognitive theories. In R. J. Menges, &

M. D. Svinicki (Eds.), College teaching: From theory to practice (pp. 27-38). San

Francisco, CA: Jossey-Bass Inc. Publishers.

Tandon, R. (1988). Participatory evaluation: Issues and concerns. New Delhi: Society

for Participatory Research in Asia.

Teddlie, C., & Yu, F. (2007). Mixed methods sampling: a typology with examples.

Journal of Mixed Methods Research, 1 (1), 77-100.

Topping, K. (1998). Peer assessment between students in colleges and universities.

Review of Educational Research, 68(3), 249–276.

Torrance, H. (2012). Triangulation, respondent validation, and democratic participation

in mixed methods research. Journal of Mixed Methods Research, 6(2), 111-123.

Trautwein, U., Lüdtke, O., Köller, O., & Baumert, J. (2006). Self-esteem, academic self-

concept, and achievement: How environment moderates the dynamics of self-

concept. Journal of Personality and Social Psychology, 90, 334-349.

Trilling, B., & Fadel, C. (2009). 21st Century skills: Learning for life in our times. San

Francisco, CA: John Wiley & Sons, Inc.

Warner, R. M. (2013). Applied statistics: From bivariate through multivariate

techniques. Thousand Oaks, CA: Sage Publications.

Weaver, L., & Cousins, J. B. (2004). Unpacking the participatory process. Retrieved

from http://www.alnap.org/resource/13037. 393

Weimer, M. (2014). The relationship between participation and discussion. Faculty

Focus. Retrieved from http://www.facultyfocus.com/articles/teaching-professor-

blog/relationship-participation-discussion/

Wezemael, V. L, Verbeke, W., & Alessandrin, A. (2012). Evaluation of a mixed

participatory method to improve mutual understanding between consumers and

chain actors. Journal of Mixed Methods Research, 7(2), 121-140.

Whitmore, E. (Ed.). (1998). Understanding and practicing participatory evaluation. San

Francisco, CA: Jossey-Bass Inc.

WHO. (1988). Learning together to work together for health: Report of a WHO study

group on multiprofessional education of health personnel: The team approach

[meeting held in Geneva from 12 to 16 October 1987]. Retrieved from

http://apps.who.int/iris/handle/10665/37411

Wilson, B., Teslow, J., & Osman-Jouchoux, R. (1995). The impact of constructivism

(and postmodernism) on ID fundamentals. In B. B. Seels (Ed.), Instructional

design fundamentals: A reconsideration (pp. 137-157). Englewood Cliffs, NJ:

Educational Technology Publications.


Appendix A. Practice Participatory Instruction

Planning

Planning was done in Fall 2012. The team held several meetings about a) how to get started, b) what to include (the five modules, one on each of the five IOM core competence skills), c) how to package the course content (Blackboard), d) a workable syllabus, e) resources needed, and f) advertising the course.

Implementation

Part 1. Cohort 1: Spring 2013 semester
a) Duration was 16 weeks.
b) Week 1. Orientation, including course introduction, iPad distribution, self-introduction, team formation, pre-survey administration, and discussion of weekly journal writing
c) Week 2. The profession representation and journal
d) Week 3. Activities on patient-centered care and journal
e) Week 4. Activities on interdisciplinary teams
f) Week 5. Activities on evidence-based practice and journal
g) Week 6. Activities on quality improvement and journal
h) Weeks 7-9. Activities on informatics and journal
i) Week 10. Activities on the case-based project and journal
j) Weeks 11-14. Preparation for the final project
k) Week 15. Presentation of the final project
l) Week 16. Evaluations, final journal entry, and post-survey

Part 2. Role(s) of the instructor. The instructor served as a facilitator, coach, and mentor.

Part 3. Role(s) of students. Students were cooperative, autonomous (in control of their own work), and co-researchers.

Part 4. Role(s) of student mentor. The student mentor served in a mentoring role.

Part 5. Study context: Interdisciplinary Health Education

Part 6. Materials: a) Technology. The technology used in this study included laptops, iPads, and apps (such as LinkedIn, Twitter, Facebook, Aurasma, Asana, Dropbox, Podio, Flow, Explain Everything, Google Docs, Google Hangout); b) Health care systems; c) Clinical cases (Cranial Nerves, Antibiotic conflicts, patient provider communication, Dog bite, SimUcase, Medical controversy, Health infographic, patient safety, HIPAA, Burn victim, Diabetes, and TIA SOAP note); d) Interdisciplinary Healthcare Professional teams (Expert opinion, Digital footprint, Health professional's job); and e) Practice participatory evaluation learning

Part 7. Activities: a) group discussion, in class and out of class; b) group meetings; c) weekly assignments on the skills; d) weekly journal writing reflecting on activity processes; e) individual and peer feedback on assignments/checklists/guidelines; and f) sharing experiences on the tasks

Part 8. Final group module project

Part 9. Guidelines: a) applying the five IOM core competence skills in the final module project; b) task activities, which included planning (topics, objectives, resources, methods, and time schedules), structuring/operating (learning environment, describing the exercise, practice, feedback, and modeling skills), and meaning (knowledge and understanding); process activities, which included confronting (group behavior, lack of knowledge and competence), feeling (emotion, both positive and negative; group dynamics; interaction), and valuing (creating a climate of respect for persons and personal autonomy, disclosing their true needs and interests, finding their integrity, determining their own reality, and humanity) (Heron, 1989); and other processes, which included encouraging and managing stress; sharing in group settings; examining professional activities, needs, and commitments; assuming responsibility for articulating personal ideas and resources; and monitoring experiences, processes, changes, and impacts (Patton, 2002); c) checklists; and d) evaluation, in which changes included skills, attitudes, feelings, behaviors, and knowledge (Patton, 2002).

Part 10. Products: Products students created included a) an apps-based project (interactive tutorials), b) a clinical case-based project, and c) the group module project.

Part 11. Packaging: Products were designed, developed, and packaged as a) PowerPoint, b) iBook, c) website, and d) Prezi.

Appendix B. Syllabus for Participatory Learning Instruction

Syllabus

Interprofessional Healthcare

HSP 5510 #11732

HSP 4510 #11731

Spring, 2014

Tuesdays, 5:15-7:15pm

Instructors

Course Description:

This is a course to enhance Inter-professional Health Science education. Instruction in this course is aligned with the Institute of Medicine's key core competencies: a) Delivering patient-centered care, b) Working as part of interdisciplinary teams, c) Practicing evidence-based medicine, d) Focusing on quality improvement, and e) Using information technology.

This course is funded through MedTAPP (Medicaid Technical Assistance and Policy

Program). MedTAPP is a university Medicaid research partnership combining nonfederal and federal funds to support the efficient and effective administration of the

Medicaid program. The Healthcare Access Initiative is the specific partnership mechanism for this course. It supports the development and retention of healthcare practitioners to serve Ohio's Medicaid population using emerging healthcare delivery models and evidence-based practices. The MedTAPP HCA was designed to align with established, successful programs and leverage existing resources to train and retain healthcare practitioners to serve Medicaid beneficiaries in the following areas: Child and

Adolescent Psychiatry, Community Psychiatry with a Geriatric and/or Integrated

Behavioral Health/ Primary Care Focus, Pediatrics, Family Practice, Advanced Practice

Nursing, and Dentistry.

More specific information on the HCA can be found at: http://grc.osu.edu/medicaidpartnerships/healthcareaccess/index.cfm

HCA was designed to provide additional funds to existing projects. The existing project related to this course was on using mobile technologies to build Inter-professional teams. Consequently the emphasis on this course will be in the exploration of electronic and mobile technologies to achieve that end. The underlying philosophy of the course is that your understanding of each other’s professions will be enhanced and fostered through building educational and informational materials related to your professions.

Your learning experiences will take the form of individual assignments, group assignments, experiments with different apps, completion of case based content, and development of new content.

In undertaking your assignments: You may design materials for children or adults.

You may gather information from individuals on or off campus. You may reflect on your experiences with different case based materials and get feedback from other students and professionals. 399

This is a different course from others you may have taken in the past. Since it is combined with compensation in the form of a fellowship, there is a certain amount of regular work expected from you. Further there may be changes to assignments depending on student feedback, changing technologies, and campus opportunities.

Team work

Three (3) teams representing disciplines including Social Work, Nursing,

Nutrition/Dietetics, Speech-Language Pathology, Physical Therapy, Medicine, Music

Therapy

Your grade will be based on:

~Timely completion of individual and group assignments. Points are deducted for late or incomplete work.

~Peer evaluations

~Journals

~Your final projects: There are 3 tracks: Inter-professional experiences, Apps and Case

Based.

***ALL ASSIGNMENTS MUST BE COMPLETED FOR A GRADE***

Journals:

A critical part of this experience is reflection. You are required to complete a reflection each week (approximately 1 page per week description). You may use any form you choose (App, document, video, audio), but the instructors need access to your journal at the mid-point and end of class. Each student must have 14 entries.

Mid-point journals due: End of week 7 400

Final journals due: Tuesday of finals week

Assignments will relate to the topic each week. Some will be completed in class and some out of class. Examples of assignments include survey construction and implementation.

Final Project Expectations

You will be designing and creating interactive online instructional modules aligned with the Institute of Medicine’s key core competencies. Each completed module must:

Choose an Audience

Choose a Focus: Case Based, App-Based, Interview-Based, Patient/Constituency

Based

Follow provided template

Goal

Mission

Role learners will be taking on

Evaluate case for decision points

What information resources are necessary?

How will learner justify their decisions?

Why is each learner’s expertise necessary?

How will they discuss?

a live meeting

an asynchronous online meeting 401

a synchronous online meeting

Provide Feedback/Interaction

Corrective feedback and elaborative feedback

Expert’s analysis

Interview experts

Collect comments

Codify comments

Summarize problem and transition to next decision point

Tentative Timetable/ Schedule:

The schedule is subject to change and assignments might be added, omitted or changed throughout the course of the class.

Week 1
Topic: Introductions; Self-Introduction
In class activities: Course Introduction; Pre-Assessment form; Presentation of individual introductions via iPad; Sample Final Project (Past)
Tasks for next class: Assignment 1; Group work on Self-Introduction edits; Review iPad apps

Week 2
Topic: The professions
In class activities: Representing your profession through Media (Digital Footprints, LinkedIn, Twitter, Facebook, What I actually do)
Tasks for next class: Assignments 2, 3, and 4

Week 3
Topic: Patient Centered Care
In class activities: Patient provider Communication (Cranial nerves, Antibiotic Conflict); Mobile Apps to inform Patients (Aurasma, PowerPoint)
Tasks for next class: Assignments 5, 6, and 7

Week 4
Topic: Work in Interdisciplinary Teams
In class activities: Collaborative problem Solving (Dog bite victim, Expert Opinion 1); Online collaborative tools, online planning tools (Google Docs, Hangout)
Tasks for next class: Assignments 8, 9, and 10

Week 5
Topic: Employ Evidence Based Practice
In class activities: Problem Solving; EBP searching; Understanding the literature of other professions: Controversial topics (Medical Controversy); When apps do a health professional's job
Tasks for next class: Assignments 11, 12, and 13

Week 6
Topic: Quality Improvement
In class activities: Regulations, online trainings, HIPAA, universal precautions (Patient Safety, Infographics); Expert opinion 2; Alternative communication
Tasks for next class: Assignments 14, 15, and 16

Week 7
Topic: Utilize Informatics
In class activities: Case Based Work; Guest speaker (Noah Trembly or Deb Orr); Explain Everything apps; Healthcare informatics; Photos; Professional interview parts 1 & 2
Tasks for next class: Assignments 17, 18, and 19; Mid-point Journals (due Friday)

Week 8
SPRING BREAK

Week 9
Topic: Utilize Informatics
In class activities: Explain Everything Presentations; Assignment development for different apps
Tasks for next class: Assignments 20, 21, and 22

Week 10
Topic: Case Based Work
In class activities: Inter-professional Experiences 3 (Burn victim, SimuCase, Diabetes, TBI); Developing content; Introduction of case
Tasks for next class: Assignment 23

Weeks 11-13
Topic: Working on Final Project. Track Choice: App, Inter-professional experience, or Case Based focus
In class activities: Sample Past Final Projects; Evaluation; Checklist for Evaluation
Tasks for next class: Check points will need to be completed (a guide will be given)

Week 14
Topic: Finishing up Final Project
In class activities: Check points; Presentation guidelines; Presentation Flyer
Tasks for next class: Assignments 24, 25, and 26

Week 15
Topic: Final Presentations
In class activities: Course reflection; Wrapping up; Post Assessment
Tasks for next class: Evaluations; Final Module

Week 16
All assignments, Final Module, Final Journals, Evaluations, and Final PowerPoint due by 5pm on April 29th.
iPad must be turned in by Wednesday, April 30th.

Additionally:

Possible guest speakers joining us during the semester: Deb Orr and Noah Trembly

Resources:

1. iPad, and apps (Lecture Capture, Dropbox, Camera , and Photos)

2. Blackboard

3. Clinical tools


Appendix C. Syllabus for Direct Instruction:

Syllabus

Inter-Professional Healthcare

HSP 5510 #9203

HSP 4510 #9202

Fall, 2014

Tuesdays, 5:15-7:15pm

Course Description:

This is a course to enhance Inter-professional Health Science education.

Instruction in this course is aligned with the Institute of Medicine's key core competencies: a) Delivering patient-centered care, b) Working as part of interdisciplinary teams, c) Practicing evidence-based medicine, d) Focusing on quality improvement, and e) Using information technology.

This course is funded through MedTAPP (Medicaid Technical Assistance and

Policy Program). MedTAPP is a university Medicaid research partnership combining nonfederal and federal funds to support the efficient and effective administration of the

Medicaid program. The Healthcare Access Initiative is the specific partnership mechanism for this course. It supports the development and retention of healthcare practitioners to serve Ohio's Medicaid population using emerging healthcare delivery models and evidence-based practices. The MedTAPP HCA was designed to align with established, successful programs and leverage existing resources to train and retain healthcare practitioners to serve Medicaid beneficiaries in the following areas: Child and

Adolescent Psychiatry, Community Psychiatry with a Geriatric and/or Integrated

Behavioral Health/ Primary Care Focus, Pediatrics, Family Practice, Advanced Practice

Nursing, and Dentistry.

More specific information on the HCA can be found at: http://grc.osu.edu/medicaidpartnerships/healthcareaccess/index.cfm

HCA was designed to provide additional funds to existing projects. The existing project related to this course was on using mobile technologies to build Inter-professional teams.

Consequently the emphasis on this course will be in the exploration of electronic and mobile technologies to achieve that end. The underlying philosophy of the course is that your understanding of each other’s professions will be enhanced and fostered through building educational and informational materials related to your professions. Your learning experiences will take the form of individual assignments, group assignments, experiments with different apps, completion of case based content, and development of new content.

This course is also about teaching Core Competencies. In an effort to balance direct instruction techniques with mobile technology instruction, the first two weeks of the course will center on the IOM Core Competencies: http://www.iom.edu/Reports/2003/health-professions-education-a-bridge-to-quality.aspx 408

In undertaking your assignments: You may design materials for children or adults. You may gather information from individuals on or off campus. You may reflect on your experiences with different case based materials and get feedback from other students and professionals.

This is a different course from others you may have taken in the past. Since it is combined with compensation in the form of a fellowship, there is a certain amount of regular work expected from you. Further there may be changes to assignments depending on student feedback, changing technologies, and campus opportunities.

Team work. Teams representing disciplines including Social Work, Nursing,

Nutrition/Dietetics, Speech-Language Pathology, Physical Therapy, Medicine, Music

Therapy, Audiology

Your grade will be based on:

~Timely completion of individual and group assignments. Deductions for late or incomplete work.

~Peer evaluations

~Journals

~Your final projects: There are 3 tracks: Inter-professional experiences, Apps and Case

Based.

***ALL ASSIGNMENTS MUST BE COMPLETED FOR A GRADE***

Journals: 409

A critical part of this experience is reflection. You are required to complete a reflection each week (approximately 1 page per week description). Please use Blackboard for this purpose for weeks one and two.

After weeks one and two, you may use any form you choose (App, document, video, audio), but the instructors need access to your journal at the mid-point and end of class.

Each student must have 14 entries.

Mid-point journals due: End of week 7

Final journals due: Tuesday of finals week

Assignments will relate to the topic each week. Some will be completed in class and some out of class. Examples of assignments include survey construction and implementation,

Final Project Expectations:

You will be designing and creating interactive online instructional modules aligned with the Institute of Medicine’s key core competencies. Each completed module must:

Choose an Audience

Choose a Focus: Case Based, App-Based, Interview-Based, Patient/Constituency

Based

Follow provided template

Goal

Mission

Role learners will be taking on

Evaluate case for decision points 410

What information resources are necessary?

How will learner justify their decisions?

Why is each learner’s expertise necessary?

How will they discuss?

a live meeting

an asynchronous online meeting

a synchronous online meeting

Provide Feedback/Interaction

Corrective feedback and elaborative feedback

Expert’s analysis

Interview experts

Collect comments

Codify comments

Summarize problem and transition to next decision point

Tentative Timetable/ Schedule:

The schedule is subject to change and assignments might be added, omitted or changed throughout the course of the class.

Week 1
Topic: IOM Core Competencies Overview
In class activities: Course Introduction; Pre-Assessment form
Tasks for next class: Journal

Week 2
Topic: IOM Core Competencies by Profession
In class activities: In-class discussion; Post Assessment
Tasks for next class: Journal; Self-Introduction

Week 3
Topic: Self-Introduction
In class activities: Presentation of individual introductions via iPad
Tasks for next class: Group work on Self-Introduction edits; Review iPad apps; Journal

Week 4
Topic: The professions
In class activities: Representing your profession through Media
Tasks for next class: Out-of-class assignment(s); Journal

Week 5
Topic: Patient Centered Care
In class activities: Patient provider Communication; Mobile Apps to inform patients
Tasks for next class: Out-of-class assignment(s); Journal

Week 6
Topic: Work in Interdisciplinary Teams
In class activities: Collaborative problem solving; Online collaborative tools, online planning tools
Tasks for next class: Out-of-class assignment(s); Journal

Week 7
Topic: Employ Evidence Based Practice
In class activities: EBP searching; Understanding the literature of other professions: Controversial topics
Tasks for next class: Out-of-class assignment(s); Journal; Mid-point Journals (due Friday)

Week 8
Topic: Quality Improvement
In class activities: Regulations, online trainings, HIPAA, universal precautions
Tasks for next class: Out-of-class assignment(s); Journal

Week 9
Topic: Utilize Informatics
In class activities: Explain Everything Presentations; Assignment development for different apps
Tasks for next class: Out-of-class assignment(s); Journal

Week 10
Topic: Case Based Work
In class activities: Interprofessional Experiences 3; Introduction of case
Tasks for next class: Assignment 23

Weeks 11-13
Topic: Working on Final Project. Track Choice: App, Interprofessional experience, or Case Based focus
In class activities: NOTE: No class meeting on November 11; discussion of virtual formats and virtual meetings for this period
Tasks for next class: Check points will need to be completed (a guide will be given)

Week 14
Topic: Finishing up Final Project
Tasks for next class: Assignments 24, 25, and 26

Week 15
Topic: Final Presentations
In class activities: Course reflection; Wrapping up
Tasks for next class: Evaluations; Final Module

Week 16
All assignments, Final Module, Final Journals, Evaluations, and Final PowerPoint due by 5pm on December 9.
iPad must be turned in by Wednesday, December 10.

Additionally:

Possible guest speakers joining us during the semester: Deb Orr and Noah Trembly

Resources:

1. iPad, and apps (Lecture Capture, Dropbox, Camera , and Photos)

2. Blackboard

3. Clinical tools


Appendix D. IOM Self-Reported Knowledge Achievement (IOMSKA) Survey

This survey is to find out what you have learned about inter-professional education

Required

Full Name

Demographic Information

Age on your last birthday

Sex

Male

Female

Status

Undergraduate

Graduate

Major

What mobile device(s) do you use on a regular basis?

What do you use the mobile device for (personal, professional)?

Please indicate for each device whether it's personal, professional, or both

What apps do you use frequently?

Team based projects

Which do you prefer?

Working in teams

Working alone

What problems did you encounter when working in a team in this class? 415

What problems did you encounter working alone in this class?

What benefits did you encounter working in a team in this class?

What benefits did you encounter working alone in this class?

Post perceptions of other disciplines

Given each discipline what do you think of... (For example, what words, phrases, or descriptions come to mind). Be as honest as you can.

Social Work

Nursing

Dietician

Physical Therapy

Speech-Language Pathology

Medicine

Ability to locate health information

How would you rate yourself on the ability to locate health information via mobile devices?

Based off of what you know now name some apps you would use to locate health information.

Given a diagnosis of "alternating hemiplegia of childhood", outline a search procedure to find out information about it.

Self-reported perceptions about the following:

Ratings and comments related to what you know about each.

Patient-centered care
Select a value from 1 (No Knowledge) to 7 (Expert): 1 2 3 4 5 6 7
Comments

Interdisciplinary teamwork
Select a value from 1 (No Knowledge) to 7 (Expert): 1 2 3 4 5 6 7
Comments

EBP (evidence based practice)
Select a value from 1 (No Knowledge) to 7 (Expert): 1 2 3 4 5 6 7
Comments

Quality improvement
Select a value from 1 (No Knowledge) to 7 (Expert): 1 2 3 4 5 6 7
Comments

Informatics
Select a value from 1 (No Knowledge) to 7 (Expert): 1 2 3 4 5 6 7
Comments

Comments about this class

Submit


Appendix E. Permission to use MedTAPP Database


Appendix F. IRB Approval Letter


Appendix G. Pilot Study

Quantitative Data Analyses

Question 1

IOM Self-Reported Knowledge Achievement Survey instrument: Reliability Statistics

(Cronbach alpha reliability)

Table 55

Reliability Coefficients of IOM Knowledge Achievement Survey Instrument before and after Instruction

Item Pre Post

Rp .54 .72

Rtw .67 .71

Re .63 .75

Rq .60 .73

Rinf .67 .74

Overall .68 .77

Note. Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics
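For reference, the coefficients in Table 55 are Cronbach's alpha values. The standard formula, for an instrument with k items, item variances \sigma^{2}_{Y_i}, and total-score variance \sigma^{2}_{X}, is

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)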

421

Table 56

Means and Standard Deviations of Students’ IOM Knowledge Achievement Scores before and after Participatory and Direct Instruction, by Standard

          Participatory (n = 82)               Direct (n = 37)
Standard  Prior-Ach (SD)   Post-Ach (SD)       Prior-Ach (SD)   Post-Ach (SD)

Rp 4.46 (1.269) 5.63 (0.809) 5.24 (0.863) 5.49 (0.768)

Rtw 4.12 (1.011) 5.70 (0.856) 4.68 (0.884) 5.05 (0.743)

Re 4.80 (1.211) 5.67 (0.903) 5.22 (0.917) 5.43 (0.801)

Rq 3.27 (1.449) 4.88 (0.986) 3.57 (1.482) 4.73 (1.045)

Rinf 1.83 (1.040) 4.74 (1.075) 1.73 (0.962) 4.16 (1.344)

Overall 3.70 (0.819) 5.32 (0.655) 4.09 (0.608) 4.97 (0.720)

Note. Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics

422

Paired Sample t-Test

Questions 2 & 3 and Null Hypotheses 1 & 2

Table 57

Means and Standard Deviations of Students’ IOM Knowledge Achievement Scores before and after Participatory and Direct Instruction, by Standard

          Participatory (n = 82)               Direct (n = 37)
Standard  Prior-Ach (SD)   Post-Ach (SD)       Prior-Ach (SD)   Post-Ach (SD)

Rp 4.46 (1.269) 5.63 (0.809) 5.24 (0.863) 5.49 (0.768)

Rtw 4.12 (1.011) 5.70 (0.856) 4.68 (0.884) 5.05 (0.743)

Re 4.80 (1.211) 5.67 (0.903) 5.22 (0.917) 5.43 (0.801)

Rq 3.27 (1.449) 4.88 (0.986) 3.57 (1.482) 4.73 (1.045)

Rinf 1.83 (1.040) 4.74 (1.075) 1.73 (0.962) 4.16 (1.344)

Overall 3.70 (0.819) 5.32 (0.655) 4.09 (0.608) 4.97 (0.720)

Note. Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics

423

Table 58

Descriptive and Inferential Statistics of Overall Students’ Mean Knowledge

Achievement Scores Before and After Instruction

Method Pre Post Change Gain t-test df p d

Participatory 3.70 5.32 1.63 0.744 15.27 81 .000 1.69

(n=82) (0.82) (0.66) (0.96)

Direct 4.09 4.97 .89 10.11 36 .000 1.10

(n=37) (0.61) (0.72) (0.81)
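Tables 58 through 63 report paired-samples t-tests on the pre- and post-instruction scores, with Cohen's d computed as the mean change divided by the standard deviation of the change scores (for the participatory group overall, 1.63 / 0.96 is approximately 1.69, matching the tabled value). A minimal SciPy sketch of that calculation follows; the variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def paired_change(pre, post):
    """Paired-samples t-test plus Cohen's d based on the change-score SD."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    change = post - pre
    t_stat, p_value = stats.ttest_rel(post, pre)    # paired t-test
    d = change.mean() / change.std(ddof=1)          # d = mean change / SD of change
    return change.mean(), t_stat, len(change) - 1, p_value, d

# Hypothetical usage for the participatory group's overall scores:
# mean_change, t, df, p, d = paired_change(pre_overall, post_overall)
```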

424

Table 59

Descriptive and Inferential Statistics of Students’ Mean Knowledge Achievement Scores

Before and After Instruction, by Patient-centered Care

Patient-centered care

Method Pre Post Change Gain t-test df p d

Part 4.46 5.63 1.17 0.93 8.51 81 .000 0.94

(n=82) (1.27) (0.81) (1.25)

Direct 5.24 5.49 0.24 1.65 36 .107 0.27

(n=37) (0.86) (0.77) (0.90)

425

Table 60

Descriptive and Inferential Statistics of Students’ Mean Knowledge Achievement Scores

Before and After Instruction, by Interdisciplinary Teams

Interdisciplinary Teams

Method Pre Post Change Gain t-test df p d

Part 4.12 5.70 1.57 1.19 10.76 81 .000 1.19

(n=82) (1.01) (0.86) (1.32)

Direct 4.68 5.05 0.38 2.02 36 .051 0.33

(n=37) (0.88) (0.74) (1.14)

426

Table 61

Descriptive and Inferential Statistics of Students’ Mean Knowledge Achievement Scores

Before and After Instruction, by Evidence-Based Practice

Evidence-Based Practice

Method Pre Post Change Gain t-test df p-value d

Part 4.80 5.67 0.87 0.65 5.81 81 .000 0.644

(n=82) (1.21) (0.90) (1.35)

Direct 5.22 5.43 0.22 1.48 36 .146 0.247

(n=37) (0.92) (0.80) (0.89)

427

Table 62

Descriptive and Inferential Statistics of Students’ Mean Knowledge Achievement Scores

Before and After Instruction, by Quality Improvement

Quality Improvement

Method Pre Post Change Gain t-test df p d

Part 3.27 4.88 1.61 0.45 9.29 81 .000 1.03

(n=82) (1.45) (1.00) (1.57)

Direct 3.57 4.73 1.16 4.45 36 .000 0.73

(n=37) (1.48) (1.05) (1.59)

Table 63

Descriptive and Inferential Statistics of Students’ Mean Knowledge Achievement Scores

Before and After Instruction, by Informatics

Informatics

Method Pre Post Change Gain t-test df p d

Part 1.83 4.74 2.91 0.48 18.53 81 .000 2.04

(n=82) (1.04) (1.08) (1.43)

Direct 1.73 4.16 2.43 10.11 36 .000 1.66

(n=37) (0.96) (1.34) (1.46)

428

Independent Samples t-Tests

Question 4 and Null Hypothesis 3

Table 64

Descriptive and Inferential Statistics of Overall Students’ Mean Knowledge

Achievement Scores before Participatory Instruction and Direct Instruction

Standard Method Prior-Ach (SD) Levene’s test p t-test df p d

Rp Part 4.46 (1.27) 9.97 .002 -3.91 98.7 .000 -1.56
   Direct 5.24 (0.83)
Rtw Part 4.12 (1.01) 0.05 .82 -2.87 117 .005 -1.12
   Direct 4.68 (0.88)
Re Part 4.80 (1.21) 3.5 .06 -1.84 117 .068 -0.84
   Direct 5.22 (0.92)
Rq Part 3.27 (1.45) 0.003 .95 -1.04 117 .303 -0.60
   Direct 3.57 (1.48)
Rinf Part 1.83 (1.04) 0.54 .46 0.49 117 .622 0.20
   Direct 1.73 (0.96)
Overall Part 3.70 (0.82) 3.98 .05 -2.88 91.7 .005 -0.78
   Direct 4.09 (0.61)

Note. Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics

429

Question 4 and Null Hypothesis 4

Table 65

Descriptive and Inferential Statistics of Overall Students’ Mean Knowledge

Achievement Scores after Participatory Instruction and Direct Instruction

Standard Method Final-Ach (SD) Levene’s test p t-test df p d

Rp Part 5.63 (0.81) 0.03 .88 0.94 117 .35 0.28

Direct 5.49 (0.77)

Rtw Part 5.70 (0.86) 5.03 .03 4.15 79.3 .000 1.30

Direct 5.05 (0.74)

Re Part 5.67 (0.90) 0.04 .85 1.38 117 .17 0.48

Direct 5.43 (0.80)

Rq Part 4.88 (0.99) 0.08 .78 0.75 117 .46 0.30

Direct 4.73 (1.05)

Rinf Part 4.74 (1.08) 2.32 .13 2.52 117 .01 1.16

Direct 4.16 (1.34)

Overall Part 5.32 (0.66) 0.37 .55 2.63 117 .01 0.70

Direct 4.97 (0.72)

Note. Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics
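Tables 64 and 65 pair each between-group comparison with Levene's test for equality of variances; where Levene's test is significant, the unequal-variance (Welch) t-test and its adjusted degrees of freedom appear to have been reported, which is why some rows show fractional df (e.g., 98.7, 91.7, 79.3). A minimal SciPy sketch of that decision rule follows; the variable names are hypothetical and the sketch does not reproduce the tabled effect sizes.

```python
from scipy import stats

def two_group_t(participatory, direct, alpha=0.05):
    """Independent-samples t-test that falls back to Welch's test when variances differ."""
    lev_stat, lev_p = stats.levene(participatory, direct, center="mean")
    equal_var = lev_p >= alpha                  # pooled test only if variances look equal
    t_stat, p_value = stats.ttest_ind(participatory, direct, equal_var=equal_var)
    return lev_stat, lev_p, t_stat, p_value, equal_var
```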

430

Question 5

Table 66

Means, Standard Deviations, and Inter-correlations for the relationship between

Students’ Knowledge Components and the Overall Students’ Knowledge Achievement before Participatory Instruction

Item Participatory(82) 1 2 3 4 5

Rp 4.46 (1.269) -

Rtw 4.12 (1.011) .379*** -

Re 4.80 (1.211) .574*** .231 -

Rq 3.27 (1.449) .469*** .213 .368** -

Rinf 1.83 (1.040) .248* .114 .061 .498*** -

Overall 3.698 (0.819) .802*** .537*** .676*** .787*** .553***

Note. Corrected alpha=.05/12= .004; *p < .05; **p < .01; ***p < .001; Rp = patient- centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics

431

Table 67

Means, Standard Deviations, and Inter-correlations for the relationship between

Students’ Knowledge Components and the Overall Students’ Knowledge Achievement after Participatory Instruction

Item Participatory(82) 1 2 3 4 5

Rp 5.63 (0.81) -

Rtw 5.70 (0.86) .41*** -

Re 5.67 (0.90) .42*** .34**** -

Rq 4.88 (0.99) .41*** .39*** .29*** -

Rinf 4.74 (1.08) .42*** .41*** .24* .42*** -

Overall 5.32 (0.66) .73*** .71*** .64*** .72*** .73***

Corrected alpha=.05/15= .003; *p < .05; **p < .01; ***p < .001; Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics

432

Question 6

Table 68

Means, Standard Deviations, and Inter-correlations for the relationship between

Students’ Knowledge Components and the Overall Students’ Knowledge Achievement before Direct Instruction

Item Direct (37) 1 2 3 4 5

Rp 5.24 (0.86) -

Rtw 4.68 (0.88) .398* -

Re 5.22 (0.92) .564** .226 -

Rq 3.57 (1.48) .150 -.068 -.01* -

Rinf 1.73 (0.96) .148 .090 -.09 .52 -

Overall 4.09 (0.61) .689*** .467** .493** .671*** .611***

Corrected alpha=.05/8= .006; *p < .05; **p < .01; ***p < .001; Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics

433

Table 69

Means, Standard Deviations, and Inter-correlations for the relationship between

Students’ Knowledge Components and the Overall Students’ Knowledge Achievement after Direct Instruction

Item Direct (37) 1 2 3 4 5

Rp 5.49 (0.77) -

Rtw 5.05 (0.74) .63*** -

Re 5.43 (0.80) .55*** .57*** -

Rq 4.73 (1.05) .480** .66*** .343* -

Rinf 4.16 (1.34) .38* .44** .398* .47** -

Overall 4.97 (0.720) .75*** .82*** .71*** .78*** .77***

Corrected alpha=.05/14= .004; *p < .05; **p < .01; ***p < .001; Rp = patient-centered care, Rtw = interdisciplinary teamwork, Re = evidence-based practice, Rq = quality improvement, Rinf = informatics
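Tables 66 through 69 report Pearson inter-correlations among the standard scores and the overall score, evaluated against a Bonferroni-corrected alpha (e.g., .05/12 = .004). A minimal pandas/SciPy sketch of that procedure follows; the DataFrame and column names are hypothetical.

```python
import itertools
import pandas as pd
from scipy import stats

def correlation_table(scores: pd.DataFrame, n_tests: int, alpha: float = 0.05) -> pd.DataFrame:
    """Pairwise Pearson r with a Bonferroni-corrected significance threshold."""
    corrected = alpha / n_tests
    rows = []
    for a, b in itertools.combinations(scores.columns, 2):
        r, p = stats.pearsonr(scores[a], scores[b])
        rows.append({"pair": f"{a}-{b}", "r": round(r, 3), "p": round(p, 4),
                     "significant": p < corrected})
    return pd.DataFrame(rows)

# Hypothetical usage with the five standards plus the overall score:
# print(correlation_table(pre_scores[["Rp", "Rtw", "Re", "Rq", "Rinf", "Overall"]], n_tests=12))
```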

434

Comparing the Mean Students' Knowledge Achievement Scores in Participatory

Instruction versus Direct Instruction by IOM Standards

Figure 5. Bar chart comparing students' mean knowledge of participatory with direct instruction of patient-centered care, before and after instruction (N=119)

Figure 6. Bar chart comparing students' mean knowledge achievement of participatory with direct instruction of interdisciplinary teams, before and after instruction (N=119)

Figure 7. Bar chart comparing students' mean knowledge achievement of participatory with direct instruction of evidence-based practice, before and after instruction (N=119)

Figure 8. Bar chart comparing students' mean knowledge achievement of participatory with direct instruction of quality improvement, before and after instruction (N=119)

Figure 9. Bar chart comparing students' mean knowledge achievement of participatory with direct instruction of utilizing informatics, before and after instruction (N=119)

Figure 10. Bar chart comparing overall students' mean knowledge achievement of participatory with direct instruction, before and after instruction (N=119)

Figure 11. Line graph comparing overall students' prior mean knowledge achievement of participatory with direct instruction, by standards (N=119)

Figure 12. Line graph comparing overall students' post mean knowledge achievement of participatory with direct instruction, by standards (N=119)

Figure 13. Line graph comparing overall students' prior mean knowledge achievement of participatory with direct instruction, by major (N=119)

Figure 14. Line graph comparing overall students' post mean knowledge achievement of participatory with direct instruction, by major (N=119)

Figure 15. Line graph comparing overall students' change in mean knowledge achievement of participatory with direct instruction, by standards (N=119)

Figure 16. Line graph comparing overall students' change in mean knowledge achievement of participatory with direct instruction, by standards (N=119)

Figure 17. Line graph comparing overall students' change in mean knowledge achievement of participatory with direct instruction, by major (N=119)

447

Qualitative Data Analyses

This section presents the journal and comment data analyses of selected students, by standard.

Journal Analysis by Standards

Selection of Students’ Journals for Qualitative Analyses

Table 70

Case IDs and Extreme Pre-rated Knowledge Scores (Low and High), by Standard

Standard: Low cases; High cases
Rp: Low (L2): 14, 20, 41, 43, 44, 68; High (H7): 16, 22, 42
Rtw: Low (L2): 14, 57, 63; High (H7): 16; (H6): 25, 34, 54, 55, 62
Re: Low (L1): 41; (L2): 11, 25; High (H7): 42, 52
Rq: Low (L1): 1, 2, 14, 22, 23, 30, 36, 41, 43, 46, 49, 57, 68; High (H6): 42
Rinf: Low (L1): 7, 9, 10, 13, 14, 16, 18, 20, 21, 22, 23, 24, 25, 28, 29, 30, 33, 36, 37, 38, 39, 40, 41, 43, 46, 47, 48, 49, 52, 53, 54, 57, 59, 67, 68; High (H5): 35, 62; (H4): 11, 42, 69

448

Table 71

Selected Cases for Journal Analysis by Standard

Standard

Cases Term Rp Rtw Re Rq Rinf ALL

14- Kr-14 Spring-13 √ √ √ √ 4

16- Sh-16 Spring-13 √ √ √ 3

22- Ha-22 Spring-13 √ √ √ 3

25-Ca-02 Summer-13 √ √ √ 3

41-Ya-21 Summer-13 √ √ √ √ 4

42- Ant-01 Fall-13 √ √ √ √ 4

43- Brid-02 Fall-13 √ √ √ 3

57- Rth-16 Fall-13 √ √ √ 3

68- Ldsy-09 Spring-14 √ √ √ 3

All 7 4 3 7 9

Note. √= standard represented

449

Table 72

Demographic of Selected Cases for Journal Analysis

Cases Term Major Team

14- Kr-14 Spring-13 SW Friends

16- Sh-16 Spring-13 SW Womyn

22- Ha-22 Spring-13 PT DelaBoro

25-Ca-02 Summer-13 SW Dragon

41-Ya-21 Summer-13 Nursing GTalk

42- Ant-01 Fall-13 MT Ohioans

43- Brid-02 Fall-13 MT CR3W

57- Rth-16 Fall-13 SLP CR3W

68- Ldsy-09 Spring-14 SW Fab5

450

Table 73

Extreme Values of Cases Selected for Journal Analysis by Standard

Standard

Cases Rp Rtw Re Rq Rinf %

14- Kr-14 L2 L2 L1 L1 80

16- Sh-16 H7 H7 L1 60

22- Ha-22 H7 L1 L1 60

25-Ca-02 H6 L2 L1 60

41-Ya-21 L1 L1 L1 60

42- Ant-01 H7 H7 H6 H4 80

43- Brid-02 L2 L1 L1 60

57- Rth-16 L2 L1 L1 60

68- Ldsy-09 L2 L1 L1 60

Note. Number of skills= 5. L1 = Lower extreme pre-rated knowledge score of the cases is 1; and H7 = higher extreme pre-rated knowledge score of the cases is 7.

Codes: Rp = patient-centered care; Rtw = interdisciplinary teamwork; Re = evidence- based practice; Rq = improving quality care; and Rinf = utilizing informatics

According to Patton (2002), purposeful random sampling adds credibility when the potential purposeful sample is larger than one can handle, and it reduces bias within a purposeful category (p. 244). Patton describes the ideal-typical qualitative methods strategy as including qualitative data, a holistic-inductive design of naturalistic inquiry, and content or case analysis. Patton reports that in a pure experimental design, in-depth interviews are conducted with all participants, both those in the treatment group and those in the control group, and both before the program begins and at the end of the program.

Patton notes that content and thematic analyses are performed to compare and contrast the control and experimental group patterns (p. 250). According to Patton (2002), the difference between experimental and quasi-experimental designs is that pure experiments are the ideal, whereas quasi-experimental designs often represent what is possible and practical (p. 253). Patton notes that the ability to use thematic analysis appears to involve a number of underlying abilities or competencies, one of which can be called pattern recognition (p. 452). Patton defines content analysis as “searching text for recurring words or themes” and as “analyzing text (interview transcripts, diaries, or documents) rather than observation-based field notes” (p. 453). Patton notes that patterns (descriptive findings) or themes (which take a more categorical or topical form) are the core meanings found through content analysis, and that the processes of searching for patterns or themes are called pattern analysis and theme analysis, respectively. Patton notes that

“there is no absolutely ‘right’ way of stating what emerges from the analysis”, and that

“there are only more and less useful ways of expressing what the data reveal” (p. 476).

Patton provides an illustrative example of changes in knowledge as follows:

“Knowledge: ‘I know about how this place was formed, its history, the rock formations, the effects of the fires on the vegetation, where the river comes from and where it goes’”

(p. 477).

According to Patton (2002), advocates of methodological purity argue that a single evaluator cannot be both deductive and inductive at the same time, or cannot be testing predetermined hypotheses and still remain open to whatever emerges from open-ended, phenomenological observation. Patton asserts that, in practice, human reasoning is sufficiently complex and flexible that an evaluator may investigate predetermined questions and test hypotheses about certain aspects of a program while remaining quite open and naturalistic in pursuing other aspects of the program (p. 253). Patton notes that, in principle, this is not greatly different from a questionnaire that includes both fixed-choice and open-ended questions.

Journal Data Content Analysis with Respect to Standards by Students

Patient-centered care. Brid02: The information about patient provider care was very useful. Our group came up with a lot of good ideas about how to handle difficult patients and clients. We agreed that it is very important for us to stay calm in these situations and make sure that the patient/client knows that they are understood and being heard. The Aurasma assignment was very frustrating for me because it would not work. My overlay would not upload no matter what I did. Finally, I deleted the app and re-downloaded it and then it worked. I made a flyer about music therapy and then recorded myself reading the flyer. My thought was that the flyer could be posted in hospitals and doctors’ offices and people apply the app if they had limited reading skills, or if they were too young to read. I am excited to hear Noah speak this week. I have actually already heard him speak twice, but it is always entertaining! 453

Ant01: I really enjoyed seeing the different professions' opinions on the antibiotic conflict assignment. It was interesting to see how differently some professions' approaches were and how similar others were.

Sh16: We also viewed a video in class which made people very angry (and giggle). The “doctor” in the video antagonized the concerned, anxious parent and did a poor job explaining the harm antibiotics could play during viral infections. The mother in the video was clearly concerned that her child was not getting better, but the main issue is that she has missed too many days at work and she is growing anxious without a conclusion to her child’s infection. The doctor could have taken more time to empathize with the mother, address and appreciate her concerns, and move on to a discussion about what steps could be taken.

Interdisciplinary teamwork. Ca02: I feel like it was probably a good representation of how some interdisciplinary team work ends up being. Whether one person decides to take on the majority of the work or whether they are forced to do so base on the situation and the expertise needed, it was definitely a lesson in learning to deal with being unable to help as much as I would like. As was also mentioned in class, it was really interesting to see the different approaches taken by the different fields. Though our input wasn’t necessarily considered in the simulation, it was interesting to hear the discussion amongst all of us of what the first course of action should be, what the next steps would be, what questions to ask, and so on and so forth. It was pretty cool to look through the SimUcases knowing that there was such in-depth simulation for each and every scenario. This kind of programming seems ground-breaking in that medical 454 professionals can practice their skills and learn which questions to ask without actually getting involved in a real case, and I hope this trend continues to grow and expand into other fields besides Speech-Language Pathology.

Sh16: My only regret is that I have not learned much about the other professions, but I am hopeful that future cohorts will be able to capitalize on the opportunity and promote interdisciplinary teamwork in healthcare professions. I will continue to serve individuals receiving Medicaid, as Medicaid is the main funder of public mental health services in the United States, and look forward to hearing about future cohorts taking this course.

Ca02: I personally had the core competency of working within interdisciplinary teams, obviously a huge one since that’s basically the whole focus of the class. I decided to keep the idea of the professional self-introduction because I feel that a team won’t work as well if roles are not clearly defined. The second thing I found that would focus on this competency is an online discussion paper about the core values of interdisciplinary teamwork in the health field. I thought this kind of thing would be perfect to go over during an actual class section so that it could become an actual discussion in the class. Different disciplines could discuss whether they feel certain values are more important in their field or not, and so on. Then, by having each group member write a reflection on the discussion, they could make much more meaning out of the information because they are relating it back to their lives.

Kr14: I had joined the call later and could not see Breanna’s invites on my iPad. She called me on the application from her iPhone and held her phone near the 455 screen, so that I could see and hear my other team members. It was not the best way to conduct an interdisciplinary team meeting. However, I think that rural practitioners must be creative to provided needed services to underserved populations, which is a personal passion of mine.

Rth16: I enjoyed hearing her experiences working on interdisciplinary teams. I also really enjoyed my time with Mrs. Wright in asking her the interview questions, and I enjoyed hearing about her experiences. I wanted to hear more stories from her, especially as her passion is with infant feeding. I focused my interview questions to be more about working on interdisciplinary teams, which still gave a lot of great information. I think it would be an awesome project to be able to interview multiple supervisors and professors to ask them about their experiences with inter-professional teams.

Evidence-based practice. I felt so much more comfortable answering the questions and giving my professional input. It is nice to know that even as a professional there will be things I am unsure about, and to know that it is okay (and actually probably a good professional practice) for me to continue researching things like official EBP statements and anything else I may not be up to date on.

Ant01: I had a little bit of trouble with the EBP assignment because I exhausted a list of search terms and couldn't find anything pertaining to music therapy and Crohn's

Disease, so I had to broaden my focus. I added that Kristen was experiencing anxiety because of the number of procedures she is undergoing and also a little bit of pain as well. This opened the door to a vast amount of research pertaining to music therapy and both pre- and postoperative pain, as well as music therapy and anxiety. 456

Quality improvement. None

Informatics. Ant01: The Explain Everything assignment was interesting. It took me awhile to get the hang of how the app worked, but I found it pretty easily accessible once I knew how it worked. I had to explain how to use it to a lot of my group because they couldn't figure it out, which made me want to make my video out on the

Explain Everything app. Instead I decided to make it on the Google Hangouts app because it is one that we use a lot in the class. I had never used the app because I usually just do it from my laptop, so this was a good idea for me. I found it a really user-friendly app with some nice features.

Comment data analysis with respect to standards by students at the end of the course

Patient-centered care knowledge. Kr14: I have a grasp on patient-centered care, yet there was little focus on defining patient-centered care. Experientially, my team learned what it means to provide patient-centered care. Sh16: Our group focused on client-centered approaches with each scenario. Ha22: The patient and their needs and well-being are at the center of all decisions made. At HCP we need to make sure their voice is hear because they are getting treatment on their body. Brid02: Patient-centered care is something I knew about before this class but now I know effective ways to implement patient centered care as an interdisciplinary team.

Interdisciplinary teamwork knowledge. Kr14: I know the basic principles of interdisciplinary teamwork as well as many of the nuances involved with interfacing with specific professions. Sh16: Our team did an excellent job communicating but in my 457 professional life, I work with an inter-professional team and our communication needs to be better. Ha22: The team works through the entire case together from getting the history to deciding what we will do next. Communicate and have roundtable discussion.

Do individual research on cases specific to your profession and share this info with the team and how you specifically can help out. ID teamwork helps avoid overlap in care.

Rth16: I realized that it is important to work with different professions to enhance the patient's care. In our group projects, it was very helpful to work with five other professions and it gave me a better sense of how and when to work with them.

Evidence-based practice knowledge. Ca02: I now feel much more comfortable in my ability to find research that lends to EBP and how to find professional organizations EBP statements. Ya21: Decisions made are based on research studies.

Brid02: I really enjoyed learning about PubMed and have used it this semester in other classes. Rth16: It was nice to see that every profession has to utilize EBP, and provide the patients with care based on the best evidence, clinical experience, and patient values.

Quality improvement knowledge. Kr14: Interdisciplinary teamwork enhanced quality improvement in that a team is able to evaluate one another both explicitly and implicitly. Sh16: I don't think we really learned about this. Ha22: How can we prevent diseases and spend less money by overlapping care in treatment. Prevention saves so much money. IP teams work to improve the effectiveness, efficient and safety of delivering patient care. Ca02: I feel I learned a lot on this topic through all of our assignments critiquing videos of patient interactions. Ya21: Correct errors in time.

Brid02: I know more about quality improvement, but am still unsure of real 458 implementations for it. Rth16: Quality improvement is important for the safety and well-being of patients. It involves assessing the hazards and interventions that are presented to the patient, and making the appropriate changes to resolve any problems. I got a basic understanding of this core competency in this class. I think actually applying quality improvement is harder to imagine, but I think I can at least do this with the interventions that I provide to my patients. I also think I can work with other professionals to apply quality improvement for patients in nursing homes, where the oral hygiene and hygiene in general can be stressed a bit more.

Informatics knowledge. Kr14 notes that “I know more than I did, when I began the course but I feel that more emphasis would have made the learning experience more unique in comparison to learning about the other core competencies which I at least had some prior knowledge about.” Sh16 reported that “I don't think we really learned much about this.” Ha22 expresses her feeling that “ Using technology to communicate and find out information. Using the ipads, Google documents and hangout, using apps, and

EBP site to help find the best treatment for the patient.” Ca02 notes “Technology and I have never gotten along, but I learned more about google+ and all of its features.” Ya21 reports that “Use iPad apps, Google document.” Brid02 notes “Before this class I had never really heard of informatics. Now I feel like I am fairly confident on what they are and how to read them.” Rth16 reports that “Informatics involves utilizing technology in order to manage information, and I think I will need to study this more as technology continues to develop. Also, different facilities utilize different forms of informatics, and because of this class, I will try to be more cognizant of how we are utilizing informatics to improve our care.” Ldsy09 notes “I wish I could have learned a few more hands on skills about this but I got really good at working the blog.”

460

Appendix H. Literature Tables

Table 74

Learning Theories

How learning occurs
Behaviorism: Black box; observable behavior is the main focus
Cognitivism: Structured, computational
Constructivism: Social; meaning created by each learner (personal)
Connectivism: Distributed within a network; social, technologically enhanced; recognizing and interpreting patterns

Influencing factors
Behaviorism: Nature of reward, punishment, stimuli
Cognitivism: Existing schema, previous experiences
Constructivism: Engagement, participation, social, cultural
Connectivism: Diversity of network, strength of ties

Role of memory
Behaviorism: Memory is the hardwiring of repeated experiences, where reward and punishment are most influential
Cognitivism: Encoding, storage, retrieval
Constructivism: Prior knowledge remixed to current context
Connectivism: Adaptive patterns, representative of current state, existing in networks

How transfer occurs
Behaviorism: Stimulus, response
Cognitivism: Duplicating knowledge constructs of the “knower”
Constructivism: Socialization
Connectivism: Connecting to (adding) nodes

Types of learning best explained
Behaviorism: Task-based learning
Cognitivism: Reasoning, clear objectives, problem solving
Constructivism: Social, vague (“ill defined”)
Connectivism: Complex learning, rapid changing core, diverse knowledge sources

Source. Siemens (2008, p. 11) from http://www.unigaia-brasil.org/pdfs/educacao/Siemens.pdf

462

Table 75

Elements of the Worldviews and Implications for Practice

Ontology (What is the nature of reality?)
Postpositivism: Singular reality (e.g., researchers reject or fail to reject hypotheses)
Constructivism: Multiple realities (e.g., researchers provide quotes to illustrate different perspectives)
Participatory: Political reality (e.g., findings are negotiated with participants)
Pragmatism: Singular and multiple realities (e.g., researchers test hypotheses and provide multiple perspectives)

Epistemology (What is the relationship between the researcher and that being researched?)
Postpositivism: Distance and impartiality (e.g., researchers objectively collect data on instruments)
Constructivism: Closeness (e.g., researchers visit participants at their sites to collect data)
Participatory: Collaboration (e.g., researchers actively involve participants as collaborators)
Pragmatism: Practicality (e.g., researchers collect data by “what works” to address the research question)

Axiology (What is the role of values?)
Postpositivism: Unbiased (e.g., researchers use checks to eliminate bias)
Constructivism: Biased (e.g., researchers actively talk about their biases and interpretations)
Participatory: Negotiated (e.g., researchers negotiate their biases with participants)
Pragmatism: Multiple stances (e.g., researchers include both biased and unbiased perspectives)

Methodology (What is the process of research?)
Postpositivism: Deductive (e.g., researchers test an a priori theory)
Constructivism: Inductive (e.g., researchers start with participants’ views and build “up” to patterns, theories, and generalizations)
Participatory: Participatory (e.g., researchers involve participants in all stages of the research and engage in cyclical reviews of results)
Pragmatism: Combining (e.g., researchers collect both quantitative and qualitative data and mix them)

Rhetoric (What is the language of research?)
Postpositivism: Formal style (e.g., researchers use agreed-on definitions of variables)
Constructivism: Informal style (e.g., researchers write in a literary, informal style)
Participatory: Advocacy and change (e.g., researchers use language that will help bring about change and advocate for participants)
Pragmatism: Formal or informal (e.g., researchers may employ both formal and informal styles of writing)

Source. Creswell & Plano Clark, 2011, p. 42.

464

Table 76

Four Worldviews

Postpositivism Constructivism

Determination Understanding

Reductionism Multiple participant meanings

Empirical observation and measurement Social and historical construction

Theory verification Theory generation

Transformative Pragmatism

Political Consequences of actions

Power and justice oriented Problem-centered

Collaborative Pluralistic

Change-oriented Real-world practice oriented

Source. Creswell (2014, p. 6).

465

Table 77

Basic Characteristics of Four Worldviews Used in Research

Postpositivist Constructivist Participatory Pragmatist

Worldview Worldview Worldview Worldview

Determination Understanding Political Consequences of

actions

Reductionism Multiple participant Empowerment and Problem centered

meanings issue oriented

Empirical Social and Collaborative Pluralistic observation and historical measurement construction

Theory verification Theory generation Change oriented Real-world practice

oriented

Source. Creswell (2009; as cited in Creswell & Plano Clark, 2011, p. 40).

466

Table 78

ALL Model for Understanding Teamwork

Attitudes and Dispositions: Experiences; Implicit Theories About Teamwork

Skills
Group Decision Making/Planning: Identify problems; Gather information; Evaluate information; Share information; Understand decisions; Set goals
Adaptability/Flexibility: Provide assistance; Reallocate tasks; Provide/Accept feedback; Monitor/Adjust performance
Interpersonal Relations: Share the work; Seek mutually agreeable solution; Consider different ways of doing things; Manage/Influence disputes
Communication: Provide clear and accurate information; Listen effectively; Ask questions; Acknowledge requests for information; Openly share ideas; Pay attention to non-verbal behaviors

Experience

Source. Baker, Horvath, Campion, Offermann, & Salas (n.d., p. 9)

467

Table 79

Gagné’s Eight Distinctive Types of Learning

Type 1 Signal learning. The individual learns to make a general, diffuse

response to a signal. This is the classical conditioned response of Pavlov.

Type 2 Stimulus-response learning. The learner acquires a precise response to

a discriminated stimulus. What is learned is a connection (Thorndike) or

a discriminated operant (Skinner), sometimes called an instrumental

response (Kimble).

Type 3 Chaining. What is acquired is a chain of two or more stimulus-response

connections. The conditions for such learning have been described by

Skinner and others.

Type 4 Verbal association. Verbal association is the learning of chains that are

verbal. Basically, the conditions resemble those for other (motor) chains.

However, the presence of language in the human being makes this a

special type because internal links may be selected from the individual’s

previously learned repertoire of language.

468

Table 79 (continued)

Type 5 Multiple discrimination. The individual learns to make different

identifying responses to as many different stimuli, which may resemble

each other in physical appearance to a greater or lesser degree.

Type 6 Concept learning. The learner acquires a capability to make a common

response to a class of stimuli that may differ from each other widely in

physical appearance. He or she is able to make a response that identifies

an entire class of objects or events.

Type 7 Principle learning. In simplest terms, a principle is a chain of two or

more concepts. It functions to control behavior in the manner suggested

by a verbalized rule of the form “If A, then B,” which, of course, may

also be learned as Type 4.

Type 8 Problem solving. Problem solving is a kind of learning that requires the

internal events usually called thinking. Two or more previously acquired

principles are somehow combined to produce a new capability that can

be shown to depend on a “higher-order” principle (pp. 58-59).

Source. Knowles et al. (2012, p. 79)

469

Table 80

The Role of the Teacher

Conditions of Learning Principles of Teaching

The learners feel a need 1. The teacher exposes students to new possibilities of self- to learn. fulfillment. 2. The teacher helps each student clarify his own aspirations for improved behavior. 3. The teacher helps each student diagnose the gap between his aspiration and his present level of performance. 4. The teacher helps the students identify the life problems they experience because of the gaps in their personal equipment. The learning 5. The teacher provides physical conditions that are environment is comfortable (as to seating, smoking, temperature, characterized by ventilation, lighting, decoration) and conducive to interaction physical comfort, (preferably, no person sitting behind another person). mutual trust and 6. The teacher accepts each student as a person of worth and respect, mutual respects his feelings and ideas. helpfulness, freedom of 7. The teacher seeks to build relationships of mutual trust expression, and and helpfulness among the students by encouraging acceptance of cooperative activities and refraining from inducing differences. competitiveness and judgmentalness. 8. The teacher exposes his own feelings and contributes his resources as a co-learner in the spirit of mutual inquiry

470

Table 80 (continued)

Conditions of Learning Principles of Teaching

The learners perceive the goals of 9. The teacher involves the students in a mutual learning experience to be their process of formulating learning objectives in which goals. the needs of the students, of the institution, of the

teacher, of the subject matter, and of the society are

taken into account.

The learners accept a share of the 10. The teacher shares his thinking about options responsibility for planning and available in the designing of learning experiences operating a learning experience, and the selection of materials and methods and and therefore have a feeling of involves the students in deciding among these commitment toward it. The options jointly. learners participate actively in 11. The teacher helps the students to organize the learning process. themselves (project groups, learning-teaching

teams, independent study) to share responsibility in

the process of mutual inquiry.

471

Table 80 (continued)

Conditions of Learning Principles of Teaching

The learning process is related 12. The teacher helps the students exploit to and makes use of the their own experiences as resources for experience of the learners. learning through the use of such techniques

as discussion, role playing, and case

method.

13. The teacher gears the presentation of his

own resources to the levels of experience of

his particular students.

14. The teacher helps the students to apply

new learning to their experience, and thus to

make them meaningful and integrated.

The learners have a sense of 15. The teacher involves the students in progress toward their goals. developing mutually acceptable criteria and

methods for measuring progress toward the

learning objectives.

16. The teacher helps the students develop

and apply procedures for self-evaluation

according to these criteria.

Source. Knowles et al. (2012, pp. 91-93).

472

Table 81 Some Characteristics of Static Versus Innovative Organizations Dimensions Characteristics Static Organizations Innovative Organizations Structure Rigid-much energy given to Flexible-much use of temporary maintaining permanent task forces; easy shifting of departments, committees; departmental lines; readiness to reverence for tradition, change constitution; depart from constitution and by-laws. tradition. Hierarchical-adherence to Multiple linkages based on chain of command. functional collaboration. Roles defined narrowly. Roles defined broadly. Property-bound. Property-mobile. Atmosphere Task-centered, impersonal. People-centered, caring. Cold, formal, reserved. Warm, informal, intimate. Suspicious. Trusting Management Function of management is to Function of management is to control personnel through release the energy of personnel; coercive power. power is used supportively.

473

Table 81 (continued) Dimensions Characteristics Static Organizations Innovative Organizations Philosophy and Cautious-low risk-taking. Experimental-high risk-taking. Attitudes Attitude towards errors: to be Attitude toward errors: to be avoided. learned from. Emphasis on personnel Emphasis on personnel selection. development. Self-sufficiency-closed system Interdependency-open system regarding sharing resources. regarding sharing resources. Emphasis on conserving Emphasis on developing and using resources. resources. Low tolerance for ambiguity. High tolerance for ambiguity. Decision High participation at top, low Relevant participation by all those making and at bottom. affected. Policy making Clear distinction between Collaborative policy making and policy making and execution. policy execution. Decision making by legal Decision making by problem mechanisms. solving. Decisions treated as final. Decisions treated as hypotheses to be tested. Communication Flow restricted. Open flow-easy access. On-way- downward. Multidirectional-up, down, Feelings repressed or hidden. sideways. Feelings expressed. Source. Knowles et al. (2012, pp.110-111).

474

Table 82

Process Elements of Andragogy

Element Pedagogical Approach Andragogical Approach

1. Preparing Learners Minimal Provide information

Prepare for participation

Help develop realistic

expectations

Begin thinking about

content

2. Climate Authority-oriented Relaxed, trusting

Formal Mutually respectful

Competitive Informal, warm

Collaborative, supportive

Openness and authenticity

3. Planning By teacher Mechanism for mutual

planning by learners and

facilitator

4. Diagnosis of needs By teacher By mutual assessment

475

Table 82 (continued)

Element Pedagogical Approach Andragogical Approach

5. Setting of objectives By teacher By mutual negotiation

6. Designing learning Logic of subject matter Sequenced by readiness plans Content units Problem units

7. Learning activities Transmittal techniques Experiential techniques

(inquiry)

8. Evaluation By teacher Mutual re-diagnosis of

needs

Mutual measurement of

program.

Source. Knowles et al. (2012, p. 115).

476

Table 83 Grow’s Stages in Learning Autonomy Stage Student Teacher Examples Stage 1 Dependent Authority Coaching with immediate feedback, drill. Informational lecture. Overcoming deficiencies and resistance. Stage 2 Interested Motivator, guide Inspiring lecture plus guided discussion. Goal- setting and learning strategies. Stage 3 Involved Facilitator Discussion facilitated by teacher who participates as equal. Seminar. Group projects. Stage 4 Self-directed Consultant, Internship, delegator dissertation, individual work or self-directed study group. Source. Knowles et al. (2012, p. 185).

477

Table 84

Learning Styles

A. Perceptual Styles

1. Visual-initial reaction to information is visual

2. Auditory-initial reaction to information is auditory

3. Emotive- initial reaction to information is emotive

B. Cognitive Styles

4. Analytic- identifies critical elements of a problem

5. Spatial- identifies shapes and objects in mental space

6. Discrimination- visualizes important elements of task

7. Categorization- uses reasonable criteria for classifying information

8. Sequential processing-process information sequentially

9. Simultaneous processing-process information visuospatially

10. Memory-retains information

11. Verbal-spatial preferences; choice of verbal or nonverbal

12. Persistence-willingness to finish work

13. Verbal risk-willingness to express opinions

14. Manipulative-desire for “hands on” activities

478

Table 84 (continued)

15. Study time preference (early morning)

16. Study time preference (late morning)

17. Study time preference (afternoon)

18. Study time preference (evening)

19. Grouping preference-desire to learn in a whole class versus dyadic grouping

20. Posture preference-desire for formal versus informal study

21. Mobility preference-desire for taking breaks while studying

22. Sound preference-desire to study in silence versus study with background sound.

23. Lighting preference-desire for bright or lower lighting

24. Temperature preference-desire for cool versus warm environments

Source. National Association of Secondary School Principals Learning Styles Task Force

(1983, p. 1; as cited in Ducette, Sewell, & Shapiro, 1996, p. 334)

479

Appendix I. First Data Cleaning-N93

Distribution of Data

Before choosing statistical tools with which to analyze the retrieved data, the distribution of the data was investigated to help the researcher determine whether the data were reasonably normal. Table 85 and the figures that follow summarize these normality checks.

Table 85

Test of Normality: Distribution of Scores

Statistic ipAch fpAch cpAch

Mean 19.57 26.05

Median 20.00 26.00

Minimum 10 19

Maximum 28 35

Skewness -0.21 0.18

Kurtosis 0.13 -0.11

Kolmogorov-Smirnov 0.08 (ns) 0.11 (s)

Shapiro-Wilk 0.98 (ns) 0.97 (s)
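The screening in Table 85 combines descriptive indices (skewness, kurtosis) with the Kolmogorov-Smirnov and Shapiro-Wilk tests. A minimal SciPy sketch of the same screening on one score column follows; the column names are hypothetical, and note that SPSS-style output applies the Lilliefors correction to the K-S test, so the simple check below can give slightly different p-values.

```python
import numpy as np
from scipy import stats

def normality_screen(x) -> dict:
    """Descriptive and inferential normality checks for one score distribution."""
    x = np.asarray(x, dtype=float)
    # Simple K-S of the standardized scores against a standard normal
    # (no Lilliefors correction, unlike SPSS-style output).
    ks_stat, ks_p = stats.kstest(stats.zscore(x), "norm")
    sw_stat, sw_p = stats.shapiro(x)            # Shapiro-Wilk
    return {"mean": x.mean(), "median": float(np.median(x)),
            "skewness": float(stats.skew(x)), "kurtosis": float(stats.kurtosis(x)),
            "K-S": (ks_stat, ks_p), "S-W": (sw_stat, sw_p)}

# Hypothetical usage: normality_screen(data["ipAch"])
```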

480

Figure 18. Histogram of the normal curve displaying initial perceived achievement scores.

Figure 19. Graph of normality test for the Q-Q plot. 481

Figure 20. Box plot of normality test for the initial perceived achievement score, by instructional types 482

Figure 21. Box plot of normality test for the initial perceived achievement score, by students’ major 483

Figure 22. Histogram of the normal curve displaying initial perceived achievement scores with direct instruction 484

Figure 23. Histogram of the normal curve displaying initial perceived achievement scores with participatory instruction

485

Figure 24. Box plot of normality test for the initial perceived achievement score, by student’s major with participatory instruction 486

Figure 25. Box plot of normality test for the initial perceived achievement score, by student’s major with direct instruction 487

Figure 26. Graph of normality test for the Q-Q plot of initial perceived achievement scores with BSN students 488

Figure 27. Graph of normality test for the Q-Q plot of initial perceived achievement scores with PT students 489

Figure 28. Graph of normality test for the Q-Q plot of initial perceived achievement scores with nutrition students 490

Figure 29. Graph of normality test for the Q-Q plot of initial perceived achievement scores with SLP students 491

Figure 30. Graph of normality test for the Q-Q plot of initial perceived achievement scores with SW students 492

Figure 31. Graph of normality test for the Q-Q plot of initial perceived achievement scores with ‘Others’ 493

Figure 32. Histogram of the normal curve displaying final perceived achievement scores 494

Figure 33. Graph of normality test for the Q-Q plot of final perceived achievement scores 495

Figure 34. Box plot of normality test for the final perceived achievement score, by instructional types 496

Figure 35. Histogram of the normal curve displaying final perceived achievement scores, by major 497

Figure 36. Histogram of the normal curve displaying initial perceived achievement scores with student’s major 498

Figure 37. Box plot of normality test for the final perceived achievement score, by student’s major 499

Figure 38. Box plot of normality test for the final perceived achievement score, by student’s major with participatory instruction 500

Figure 39. Box plot of normality test for the final perceived achievement score, by student’s major with direct instruction 501

Figure 40. Graph of normality test for the Q-Q plot of final perceived achievement scores with BSN students 502

Figure 41. Graph of normality test for the Q-Q plot of final perceived achievement scores with PT students 503

Figure 42. Graph of normality test for the Q-Q plot of final perceived achievement scores with nutrition students 504

Figure 43. Graph of normality test for the Q-Q plot of final perceived achievement scores with SLP students 505

Figure 44. Graph of normality test for the Q-Q plot of final perceived achievement scores with SW students 506

Figure 45. Graph of normality test for the Q-Q plot of final perceived achievement scores with ‘Others’ students (Others comprised MD, MT, CFS, and Audiology) 507

Figure 46. Box plot of normality test for the change perceived achievement score, by instructional types 508

Figure 47. Box plot of normality test for the change perceived achievement score, by gender 509

Figure 48. Box plot of normality test for the final perceived achievement score, by inter- professional teams 510

Figure 49. Box plot of normality test for the change perceived achievement score, by team preferences 511

Figure 50. Box plot of normality test for the change perceived achievement score, by status 512

Figure 51. Box plot of normality test for the change perceived achievement score, by student’s major

513

Appendix J. Second Data Cleaning-N90

Table 86

Test of Normality: Distribution of Scores, Scale Variable

Descriptive K-S S-W

Variable  Mean   Median  Sk    Ku    K-S Stat  df  p     S-W Stat  df  p
ipAch     19.60  20.00   -.14  .17   .09       90  .06   .98       90  .26
fpAch     26.08  26.00   .20   -.03  .10       90  .02   .97       90  .04
cpAch     6.48   6.00    .30   .22   .09       90  .09   .98       90  .25

Age 23.58 23.00 2.51 7.90 .29 90 .001 .73 90 .001

Note. Sk = Skewness; Ku = Kurtosis; K-S = Kolmogorov-Smirnov; S-W = Shapiro-Wilk;

Stat = Statistics

514

Table 87

Test of Normality: Distribution of Scores, by Method

Variable Descriptive K-S S-W

Variable  Method  Mean   Median  Sk    Ku    K-S Stat  df  p     S-W Stat  df  p
ipAch     Part    19.12  20.00   -.30  -.50  .13       40  .10   .97       40  .36
          Direct  19.98  19.00   .75   .03   .18       50  .001  .94       50  .01
fpAch     Part    27.45  27.50   .11   .61   .15       40  .03   .97       40  .41
          Direct  24.98  25.00   .34   -.03  .12       50  .06   .96       50  .06
cpAch     Part    8.32   7.50    .27   -.50  .11       40  .20   .97       40  .37
          Direct  5.00   5.00    -.36  -.14  .10       50  .20   .97       50  .19

Age Part 24.28 23 2.12 4.40 .33 40 .001 .73 40 .001

Direct 23.02 23 1.57 4.77 .19 50 .001 .87 50 .001

Note. Sk = Skewness; Ku = Kurtosis; K-S = Kolmogorov-Smirnov; S-W = Shapiro-Wilk;

Stat = Statistics

515

Table 88

Test of Normality: Distribution of Scores, by Gender

Variable Descriptive K-S S-W

Gender Mean Median Sk Ku Stat df p Stat df p ipAch Female 19.34 20.00 -.41 .25 .09 73 .20 .97 73 .13

Male 20.71 20.00 -.09 -.55 .11 17 .20 .96 17 .65 fpAch Female 25.90 26.00 .16 .04 .11 73 .03 .97 73 .06

Male 26.82 27.00 .17 -.15 .14 17 .20 .97 17 .82 cpAch Female 6.56 6.00 .26 .20 .09 73 .20 .99 73 .61

Male 6.12 6.00 .52 .62 .12 17 .20 .95 17 .51

Note. Sk = Skewness; Ku = Kurtosis; K-S = Kolmogorov-Smirnov; S-W = Shapiro-Wilk;

Stat = Statistics

516

Table 89

Test of Normality: Distribution of Scores, by Status

Variable Descriptive K-S S-W

Status Mean Median Sk Ku Stat df p Stat df p ipAch Ungrad 20.57 20.00 .30 -.98 .12 23 .20 .95 23 .32

Grad 19.27 19.00 -.45 .46 .10 67 .10 .97 67 .08 fpAch Ungrad 25.96 25.00 -.10 -.67 .13 23 .20 .95 23 .29

Grad 26.12 26.00 .22 .16 .11 67 .04 .97 67 .06 cpAch Ungrad 5.39 5.00 -.39 -.41 .14 23 .20 .96 23 .39

Grad 6.85 6.00 .35 .10 .10 67 .17 .98 67 .28

Note. Sk = Skewness; Ku = Kurtosis; K-S = Kolmogorov-Smirnov; S-W = Shapiro-Wilk;

Stat = Statistics

517

Table 90

Test of Normality: Distribution of Scores, by Team Preference

Variable Descriptive K-S S-W

tpref Mean Median Sk Ku Stat df p Stat df p ipAch Wktm 19.63 19.00 .10 .01 .10 63 .20 .98 63 .41

Wkaln 19.52 20.00 -.62 .67 .17 27 .05 .95 27 .22 fpAch Wktm 25.87 26.00 .33 .17 .13 63 .007 .96 63 .03

Wkaln 26.56 27.00 -.12 .01 .14 27 .20 .97 27 .51 cpAch Wktm 6.24 6.00 .21 -.06 .08 63 .20 .99 63 .67

Wkaln 7.04 6.00 .33 .44 .13 27 .20 .96 27 .45

Note. Sk = Skewness; Ku = Kurtosis; K-S = Kolmogorov-Smirnov; S-W = Shapiro-Wilk;

Stat = Statistics

518

Figure 52. Box plot of normality test for the change perceived achievement score, by student’s team preferences 519

Figure 53. Box plot of normality test for the change perceived achievement score, by student’s team preferences with participatory instruction 520

Figure 54. Box plot of normality test for the change perceived achievement score, by student’s team preferences with direct instruction

521

Table 91 Test of Normality: Distribution of Scores, by Major Variable Descriptive K-S S-W

Major Mean Median Sk Ku Stat df p Stat df p ipAch BSN 20.87 20.00 .30 -.96 .15 15 .20 .95 15 .56

PT 20.88 21.00 -.64 .51 .15 16 .20 .94 16 .39

NUT 20.00 20.00 .52 -.46 .19 13 .20 .95 13 .57

SLP 17.26 17.00 -.19 .15 .15 19 .20 .98 19 .95

SW 19.71 19.50 -1.14 2.84 .18 14 .20 .90 14 .11

Others 19.46 20.00 -.35 -.25 .16 13 .20 .95 13 .59 fpAch BSN 25.80 25.00 -.23 -.84 .14 15 .20 .93 15 .23

PT 26.50 27.50 -.79 -.20 .18 16 .18 .92 16 .17

NUT 24.77 25.00 .37 -.42 .16 13 .20 .95 13 .63

SLP 26.00 26.00 .57 .76 .18 19 .09 .94 19 .26

SW 25.93 26.00 .30 .79 .16 14 .20 .95 14 .59

Others 27.46 27.00 -.07 -.32 .15 13 .20 .94 13 .41 cpAch BSN 4.93 5.00 -.49 -.53 .11 15 .20 .95 15 .50

PT 5.62 6.00 -.60 .41 .11 16 .20 .95 16 .48

NUT 4.77 5.00 .09 .31 .16 13 .20 .95 13 .54

SLP 8.74 9.00 -.35 -.25 .11 19 .20 .98 19 .93

SW 6.21 6.50 .64 .86 .23 14 .04 .92 14 .21

Others 8.00 8.00 .30 -.35 .12 13 .20 .96 13 .74

Note. Sk = Skewness; Ku = Kurtosis; K-S = Kolmogorov-Smirnov; S-W = Shapiro-Wilk;

Stat = Statistics 522

Figure 55. Box plot of normality test for the change perceived achievement score, by student’s major with instructional types 523

Figure 56. Box plot of normality test for the change perceived achievement score, by student’s major with participatory instruction 524

Figure 57. Box plot of normality test for the change perceived achievement score, by student’s major with direct instruction

525

Univariate Outliers

Table 92

Descriptive Statistics for Initial Perceived Achievement, Final Perceived Achievement, and Perceived Achievement Change

Descriptive                 ipAch   fpAch   cpAch
Mean                        19.60   26.08   6.48
Std. Error of Mean          .397    .377    .469
95% CI for Mean, Lower B    18.81   25.33   5.55
95% CI for Mean, Upper B    20.39   26.83   7.41
5% Trimmed Mean             19.65   25.99   6.40
Median                      20.00   26.00   6.00
Variance                    14.18   12.81   19.80
Std. Dev                    3.77    3.58    4.45
Minimum                     10      19      -4
Maximum                     28      35      18
Range                       18      16      22
Interquartile range         5       4       6
Skewness                    -.14    .20     .30
Std. Error of Skewness      .254    .254    .254
Kurtosis                    .17     -.03    .22
Std. Error of Kurtosis      .503    .503    .503
K-S statistic               .09     .10     .09
df                          90      90      90
p                           .06     .02     .09
S-W statistic               .98     .97     .98
df                          90      90      90
p                           .26     .04     .25

526

Qualitative Data Analyses

This section provides the journal and comments data analyses of six students, by standard.

Selection of Students’ Journal for Qualitative Analyses

Table 93

Outliers, Cases, and Scores for various Quantitative and Categorical Variables

Category ipAch fpAch cpAch

Major #26 (10, SW) #31 (35, SLP); #67 (34, #26 (18, SW); #33 (16,

K-S=.18, df=14, SLP); #9 (19, SLP) SW); #54 (-4, SW)

p=.20 K-S=.18, df=19, p=.09 K-S=.23, df=14, p=.04

S-W=.90, S-W=.94, df=19, p=.26 S-W=.92, df=14, p=.21

df=14, p=.11

Gender - #33 (35, F) #31 (17, M), #4 (16, M);

K-S=.11, df=73, p=.03 #71 (-3, M)

S-W=.97, df=73, p=.05 K-S=.12, df=17, p=.20

S-W=.95, df=17, p=.51

527

Table 93 (continued)

Category ipAch fpAch cpAch Inter- #1 (27, Ohio); #7 (18, #13 (31, The #45 (8, Health Adv); #55 profession Ohio); #34 (24, Clinical Crew); (11, Health Crusaders); Teams MIJEL); #42 (23, #15 (25, The #54 (10, Interdisciplinary Health Adv); #54 (25, Clinical Crew); Dreams) Interdis dream); #71 #67 (34, Code (28, Code Blue) Blue)

Team Pref #1 (27, Wkaln); #30 #33 (35, Wktms) #26 (18, Wkaln) (11, Wkaln); #26 (10, K-S=.13, df=63, K-S=.13, df=27, p=.20 Wkaln) p=.007 S-W=.96, df=27, p=.45 K-S=.17, df=27, p=.05 S-W=.96, df=63, S-W=.95, df=27, p=.22 p=.03 Status - #12 (35, Grad); #26 (18, Grad) K-S=.119, df=23, p=.20 #33 (35, Grad) K-S=.10, df=67, p=.17 S-W=.95, df=23, p=.32 K-S=.11, df=67, S-W=.98, df=67, p=.28 p=.04 S-W=.96, df=67, p=.06 Method K-S=.09, df=90, p=.06 K-S=.10, df=90, K-S=.09, df=90, p=.09 S-W=.98, df=90, p=.26 p=.02 S-W=.98, df=90, p=.25 S-W=.97, df=90, p=.04

528

Table 94

Selection of Cases Using Boxplots on Initial Perceived Achievement Scores, by

Instructional Method and Inter-profession Teams

Instructional Method

Cohort Participatory Coding ipAch Score Direct Coding ipAchScore

Fall #1 (boxplot) PA 27 #42 (boxplot) DA 23

Fall #2 (deleted) PB #52 (deleted) DB

Fall #7 (boxplot) PC 18 #54 (boxplot) DC 25
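The cases in Tables 93 and 94 were flagged from boxplots; the conventional boxplot rule marks a score as an outlier when it falls more than 1.5 interquartile ranges beyond the first or third quartile. A minimal pandas sketch of that rule applied within groups follows; the column names (case_id, ipAch, team) are hypothetical.

```python
import pandas as pd

def boxplot_outliers(df: pd.DataFrame, score: str, group: str) -> pd.DataFrame:
    """Flag cases beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR within each group."""
    flagged = []
    for _, g in df.groupby(group):
        q1, q3 = g[score].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask = (g[score] < q1 - 1.5 * iqr) | (g[score] > q3 + 1.5 * iqr)
        flagged.append(g.loc[mask, ["case_id", group, score]])
    return pd.concat(flagged) if flagged else pd.DataFrame()

# Hypothetical usage: boxplot_outliers(data, score="ipAch", group="team")
```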

529

Figure 58. Boxplots showing cases ids of outliers of initial perceived achievement scores, by inter-profession teams 530

Figure 59. Boxplots showing cases ids of outliers of initial perceived achievement scores, by Major

531

Table 95

Variables, Variable Codes, Dummy Codes, and Dummy Variables

Instructional type
  trtP (1): dtrtP = 1, else = 0; Compute trtP-ipAch
  trtD (2): Recode 2 = 1, dtrtD = 1, else = 0; Compute trtD-ipAch
Major
  BSN (1): dBSN = 1, else = 0; Compute dBSN-ipAch
  PT (2): Recode 2 = 1, dPT = 1, else = 0; Compute dPT-ipAch
  NUT (3): Recode 3 = 1, dNUT = 1, else = 0; Compute dNUT-ipAch
  SLP (4): Recode 4 = 1, dSLP = 1, else = 0; Compute dSLP-ipAch
  SW (5): Recode 5 = 1, dSW = 1, else = 0; Compute dSW-ipAch (dSWiAch)
  Others (6): Recode 6 = 1, dOther = 1, else = 0; Compute dOther-ipAch (dOiAch)
Status
  Ugrad (1): dUgrad = 1, else = 0; Compute dUgrad-ipAch (dUgdiAch)
  Grad (2): Recode 2 = 1, dGrad = 1, else = 0; Compute dGrad-ipAch (dGdiAch)
Team preference
  Wkintm (1): dWkintm = 1, else = 0; Compute dWkintm-ipAch (dWktmiAch)
  Wkalon (2): Recode 2 = 1, dWkalon = 1, else = 0; Compute dWkalon-ipAch (dWklniAch)
Inter-profession team
  Team A (1): dtmA = 1, else = 0; Compute dA-ipAch
  Team B (2): Recode 2 = 1, dtmB = 1, else = 0; Compute dB-ipAch
  Team C (3): Recode 3 = 1, dtmC = 1, else = 0; Compute dC-ipAch
  Team D (4): Recode 4 = 1, dtmD = 1, else = 0; Compute dD-ipAch
  Team E (5): Recode 5 = 1, dtmE = 1, else = 0; Compute dE-ipAch
  Team F (6): Recode 6 = 1, dtmF = 1, else = 0; Compute dF-ipAch
  Team G (7): Recode 7 = 1, dtmG = 1, else = 0; Compute dG-ipAch
  Team H (8): Recode 8 = 1, dtmH = 1, else = 0; Compute dH-ipAch
  Team K (9): Recode 9 = 1, dtmK = 1, else = 0; Compute dK-ipAch
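Table 95 documents how each categorical predictor was dummy coded and how each dummy was multiplied by the covariate (initial perceived achievement) to form dummy-by-covariate product terms of the kind used when checking the homogeneity-of-regression-slopes assumption for ANCOVA. The sketch below reproduces that bookkeeping in Python; the column names ("method", "major", "status", "tpref", "ipteam", "ipAch") are assumptions for illustration, not the study's actual variable names.

# Sketch: dummy-code the categorical predictors and compute the
# dummy * covariate products listed in Table 95 (e.g., dBSN * ipAch).
import pandas as pd

df = pd.read_csv("journal_scores.csv")          # hypothetical data file

categorical = ["method", "major", "status", "tpref", "ipteam"]
dummies = pd.get_dummies(df[categorical].astype("category"), prefix="d", dtype=int)
df = pd.concat([df, dummies], axis=1)

# One product term per dummy column (the Compute dX-ipAch step in SPSS)
for col in dummies.columns:
    df[f"{col}_x_ipAch"] = df[col] * df["ipAch"]

print(df.filter(like="_x_ipAch").head())

In the SPSS workflow the same products are created with Recode followed by Compute statements; the sketch simply performs both steps in a single pass.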


Appendix K. Analysis of Students’ Journal Reflections

The following reports are sentiments expressed by six HSP students who were selected on the basis of their initial perceived achievement and self-concept scores on the IOM standards, with the expectation that their journal reflections might help explain the quantitative findings in Phase 1.

Patient-Centered Care

According to PB, an interesting aspect of patient-centered care was how to deliver bad news to the patients and their families.

This week we worked on a few assignments as a group. We met at the Front

Room in Baker Center to work on the Diabetes assignment where we watched a

YouTube video and then made comments on what we thought the Doctor could

have improved on. As a music therapist, I will not be the one giving patients and

family members bad news about a diagnosis or prognosis, but I will often be

dealing with the healing process after hearing bad news. It was an interesting task

to think about how my colleagues will make the deliverance of bad news easier

for the patient to understand. We also had a lot of fun filming our own simulation

of how a doctor should deliver bad news.

PC also noted an interesting aspect of health care was working collaboratively with different professionals:

I was excited to learn about all the different professions I would be working with.

Collaboration is a very interesting topic, and something that I feel the professional

world does not do enough. For the most part, in most environments, we stick to

our own “crew” and are often hesitant, intimidated, or too prideful to ask for help

or another perspective outside our scope of practice. We ultimately came to the

conclusion to put her in the riftin and corner chair-huge difference! Her posture

was much easier to maintain, and her utterances were expanding immensely. I

explain this scenario because it was then I truly saw IP collaboration first hand,

and saw the benefits of it. Having the small experience made me less

“intimidated” to meet the other professions within the class.

Interdisciplinary Teamwork

On the interdisciplinary teamwork standard, PB reported that she was stressed, that the instructor cared about the students, that the assignments were helpful, and that she received useful feedback from her teammates:

I enjoyed working with my group on the "dog bite" assignment. I was glad to

have group members for that assignment because, as a music therapy student, I

had the least experience with clinical protocol then the rest of my group. Ch05,

the medical student and Mat14, the nursing student, knew a lot about what the

doctor and nurse did wrong. I was just going off of my personal experiences and

what I thought I would feel like if I was that mother or patient. It was definitely

helpful to have group members who knew all of the technicalities of what they

should have been doing. It was also nice to hear Kat09 speak from a social

worker's point of view because she had some good ideas about how to make the

mom and patient feel comforted. I also enjoyed doing the expert opinion

assignment. I discovered this link on the American Music Therapy Association 535 website that had a bunch of podcasts by board certified music therapists. The podcasts covered basically every topic from the professional side of music therapy to using music and movement with the elderly. It was really cool to be able to listen to a few of them. I really enjoyed finding a podcast about music therapy in the NICU. I am trying to get an internship, which I will be able to work in a NICU, so it was really interesting for me and led me to looking for other information on music therapy for preemies. From there I found a really interesting YouTube video about using lullabies with preemies to increase bonding between mother and child. It was really cool to see the music in action instead of just hearing it. The comments we received while presenting were really helpful. Everyone seemed to like what we had started on. We also got some good feedback about how to incorporate material from our first scenario into the rest of the iBook. I think that the feedback was great, but I am not sure if we are going to have time to incorporate everything into our iBook. I felt bad that

[instructor] seemed to feel bad that we were all really stressed out. He calmed my nerves a lot just by saying that this project is not meant to be a giant stressful assignment and that everyone was going to get it done. I'm glad that [instructor] at least cares that we are stressed and was willing to work with us on the requirements because a lot of my professors could care less if their assignments are too much of if we are not going to have time to get everything done.

PC reported that she was intimidated and that self-branding was challenging; she built her profile, interacted with, and learned about other professionals; and her stereotypes had diminished:

I will admit that doctors are extremely intimidating! I cannot help but think about

all they know, and the extensive knowledge they have about so many things...I

cannot help but be intimidated. However, Ch05 was very down to earth and easy

to talk to. Already one of my stereotypes had started to diminish. I am most

excited to work with other professions in the hopes that I will be less hesitant to

approach and interact with other professionals when I am out in the field. I

thought the articles about “professional branding” were helpful. It is hard to think

of yourself as a brand you are selling to companies and other professionals, but

that is the world we live in! So many people are competing for one position, just

one. I haven't really thought about joining groups, so, this assignment actually

helped build my profile in that sense. I really found the LinkedIn PDF helpful!

The information was in a concise format and easy to read. It was straightforward

and offered examples when necessary. Setting up a Twitter account is relatively

easy—I never thought to use it professionally, though. I have a personal one and

do not even use that; it will take some practice checking into this account to catch

up of discussion posts, threads that may be relevant to speech language

pathologists. I feel like we are finally starting to collaborate as a group. I am

getting to know each profession, and we are breaking down some stereotypes and

barriers we might have initially held onto. The major swap assignment was not extremely difficult but it took me a little bit of time to make sure I said the correct things--where to find the information? I "major-swapped" with Ch05, the medical student. He helped explain the difference on a DO and MD. He chose to

“major-swap” with SLP because he truly knows very little (his confession). It was interesting what he included in the video. In my school/SLP class, we just spoke about the memetic theory and why some memes are not successful. In our field, language is an extremely unsuccessful meme. The general public assumes we work on speech and speech alone, we work on speech “impediments”, and help children “enunciate”. And what did Ch05 say? Impediments and speech...it is so interesting! How are we to expand the knowledge of what we do as SLPs?

This week we watched the Dog Bite Victim video and discussed it as a group. I didn’t really know what to expect with this type of title, but the video did not seem particularly compelling. A young boy was bitten by a dog, and was treated.

What was upsetting was the reaction of the physician to the nurse when she didn’t properly put her gloves on. The physician openly, and with audible frustration, belittled the nurse in front of the patient and his mother. I think this really bothered me because I have experienced this in my clinic experience at OU.

There is a supervisor who has openly yelled, belittled, and berated students in front of peers, patients, and their families. It is really an awful feeling. I came at this discussion keeping that in mind. I think bringing the nurse to the side in some way, and kindly letting her know why she needs to wear gloves, or just offer a friendly reminder—we all make little mistakes, I'm sure the nurse knows she

needs to wear gloves, it was a small mistake. It was interesting to hear Ch05’

point of view, being that he is the med student. I do feel as though I am learning

about the different professionals in my group and also understanding my role

within a team setting. I have learned and learned about the importance of IP

relationships, but looking back I will remember a face of a person in my group

and perhaps not be as reluctant to participate and contribute my expertise. The

burn victim video didn’t cause me extreme distress because I have been treated

mildly coldy by physicians before. Honestly, the majority of doctors I see aren’t

extremely personable—I don’t mean to stereotype, but they are in and out without

much room for conversation not about my body and symptoms. Sometimes I

think we put too much pressure on physicians to give us the right answer, the

answer we want, and also be a “best friend” while doing it. There are so many

other professions where this isn’t expected, it is just interesting the current

criticisms society is placing on physicians. To be frank, I wouldn’t want my

doctor touching all over my body and asking me how my life is, or, “how’s your

mom”. I’d rather get the exam over with and put my clothes back on.

Evidence-Based Practice

On evidence-based practice, PB reported that she had multiple useful search options, that she found great articles, and that she believed people were beginning to trust technology:

This week, I thought that it was very useful to hear the woman from the library

show us a few useful search options to find evidence based practice sources. I had

no idea that “PubMed” even existed, or that it is available to students for free. I

had always used the “ArticlesPlus” option, but it is very useful to have multiple

options for search. I was able to find a lot of great articles about music therapy

studies that have been conducted right away. I used both of these resources when

looking for articles for the evidence based practice assignment and found a really

useful article about pain management and music therapy. The study is about

music therapy and its use in pain management with breast cancer patients. The

study found extreme success in decreased pain in the patients that received music

therapy. It was also really interesting to see what my group members came up

with. It was interesting to hear all of my group members’ opinions on the “When

Apps do a Health Professional’s Job” assignment. My immediate reaction after

reading the article was that the app could never do the job of the optometrist. I

thought that people would never trust the app like they would trust a real doctor.

Some of my group members actually agreed that the app could do the job of the

optometrist and that optometrists may need to look out for this. After hearing

their opinions, I realized that they are probably right. As the people get more

accustomed to using apps and technology I could actually see some people begin

to trust the technology. This is kind of a crazy thought and is really interesting to

me.

PC also reported that evidence-based practice was helpful, interesting, and fun; however, she was not confident she could duplicate the experience, and she questioned how an app could guide patients and their families, who were not trained:

For the EBP assignment, I found it extremely helpful. I would be interested to know exactly what the library pays, and to whom in order to allow us so access to all of the materials we are privy to, using our Ohio ID. The google-hangout this week with different members of each group was actually interesting and fun. It took us a while to get everything set up, and in fact one member was extremely more knowledgeable and “took the reins” which was really nice. However, I do not feel confident I could duplicate the experience that easily because I myself was not familiar with the ins and outs. We had a nice conversation about group dynamic, but ultimately decided we experienced it more during our undergraduate career. I only half agreed because I have experienced discontent among groups since I have been in graduate school. Lastly, I will comment on the assignment regarding the article, “When apps do a health professional’s job”. For my own profession, I do not foresee an app taking the place on one-on-one interaction between an SLP and a patient. How can an app guide the patient individually?

The thing that concerns me is all the blogs that are available on the internet for anyone to access; sure, it is great for parents to have tools to use at home, but at the end of the day they were not trained like we are and actually may not be doing the best thing for their child- i.e. they might “work” with them for a month, see no progress, then assume SLPs don’t know what they’re talking about.


Quality Improvement

On quality improvement, PB noted that some apps were interesting, cool, and useful, while others were difficult, did not work, or could not be accessed. She added that good resources were shared during the Expert Opinion assignment and that having access to the links to these apps in one place was valuable:

The infographic app assignment that we did in class was interesting. I was

supposed to look for apps that had to do with mission-control teams. I found a

couple apps that had to do with mission-control, but they both were still in the

process of being made. They seemed kind of cool though. I think that it could be

used in the clinical setting also. I thought that the tracking of air crafts was very

interesting and cool. If they can track air crafts from an app, then I am thinking

that they definitely be able to track an ambulance or life flight helicopter. The

group members in my group came up with some very good uses for the apps that

they found too. Creating a safety newsletter for my field was a little bit difficult

for me because there are not too many safety precautions that we take as music

therapy students yet. I found it easier to pretend that I was working with a private

practice that a few of my professors work at. I mainly focused on cleanliness of

instruments and the washing of hands as a safety precaution. I had a hard time

using a template on my computer and ended up making my own. Working on the

HIPAA apps didn't work very well for my group because both of our assigned

apps were very highly protected and we didn't have access to them yet. It was

kind of interesting to see that they were highly protected though, because that

makes me believe it might actually be HIPPA protected. Doing the Expert

Opinion 2 assignment with the people in my major was very useful because they

shared a few really good resources that they found and I shared some good

resources that I found. One that I really enjoyed was a TED talk that Anthony

found. It is good to have access to all of the links in one spot and I will definitely

look back on it while doing my research.

PC reported that quality improvement was challenging; she claimed that there was a huge risk of transferring germs at clinics; she suggested hand washing, disinfecting, and sanitizing the door knobs as safety measures:

The patient safety assignment was a little challenging because there was nothing

that jumped out at me in terms of safety. Once I read the example on blackboard,

I was able to think a little more outside the box. In our clinic, there is a huge risk

of transferring germs. There are so many little kids that come in and out each and

every day. Hand washing, and disinfecting is very important. Also, parents will

often bring their children in for their session even though they are sick. There are

all things to be aware of. Parents should know the risks when bringing their child

into a communal space such as the clinic. In Doctor’s offices there are always

reminders to wash your hands, etc. We should take more precautions to sanitize

the door knobs and different surfaces within the clinic to prevent transfer of

germs.

Utilize Informatics

On utilizing informatics, PB outlined her experiences. She reported that informatics was interesting and fun, that she learned how to use apps, and that she created videos. She was surprised that some apps were free and easy to use. She had an interesting discussion on different ways to remember names. She liked the interviewing process and was interested in interviewing experts:

The Explain Everything app assignment was actually really interesting for me and

kind of fun! I had a good time figuring out how to use it in class, and then it was

pretty easy for me to complete the assignment at home. I did an informational

video on how to use the app "Skype." I could definitely see myself using this app

with the club that I am the president of. I could do informational videos on topics

of music therapy and send them out to the group. I am surprised that this app is

free and so easy to use. The Healthcare Informatics assignment was interesting

because there was not very much information about Ohio. We all also agreed that

it would have been nice to have some information that would help us compare the

statistics of Ohio with the statistics of other states. That way, we would be able to

see where Ohio stood. The social worker in my group, Katlyn, also pointed out

that it is pretty important to know what type of insurance the people with

insurance have. It was also interesting to discuss the different ways that we

remember peoples' names. Everyone in our group basically said that they don't do

anything specific to remember names. We joked a little bit that when we used to

do the name game type activities at school or something that those really helped.

One example of this was putting an adjective that starts with the same letter of

their name with their name, for example "Marvelous Matt." Later in the week, I

conducted my professional interview with my professor. I really liked going

through the whole process of setting up the interview and designing the questions.

A lot more prep work goes into setting up an interview than I would have thought.

It was really cool to hear the responses that my professor came up with for my

questions. I learned a lot about her background and her research. It was very

interesting to hear how she transitioned from student to professional and the

experiences she had along the way.

On the module project presentation, PB indicated that she was surprised at being selected to present but was nonetheless prepared. She liked the presentation format:

Since I had worked on the module, I was not as involved with working on the

presentation, so I was really hoping that I would not get randomly chosen to

present… but of course…. I ended up getting picked. I just had a feeling that it

was going to be me right before our turn. Luckily I had practiced presenting with

my roommates, so I was prepared and I think that it ended up going okay. I think

that the randomizer element of presentation day actually added a little bit of a

level of excitement to the day and I think other professors should use that method.

I really liked the 20 seconds per slide presentation style as well. It helped the

flow of the presentation a lot and kept each group on time. Anthony and I are

actually going to use this style of presentation for our final practicum

presentations. I liked hearing about each groups’ modules. It seems as though

everyone survived our final projects, just like [instructor] said we would!

PC reported gaining a general understanding of interacting with interdisciplinary teams:

Overall, I feel like I gained a general understanding of what it might be like to be

around many different professionals. I think the only way to truly learn how to

interact on an interdisciplinary team is to be out there doing it. I appreciated this

experience, and will remember it when I begin to work with other professionals.

On the standards, DA reported that audiology works in interdisciplinary teams, claimed that working in teams would be an asset in his future career, and hoped to gain better knowledge of the other professions. He noted that he had no prior knowledge of quality improvement and informatics and was unaware of the dietician's schooling. He claimed that he was knowledgeable in patient-centered care and evidence-based practice:

I think it is great that audiology is involved in the IP team. Audiologists work a

good deal with speech pathologists, physical therapists, and medical doctors. I

am certain most of the professionals in the field today did not have a course in IP

teams. I am confident that working with other professions will be an asset in my

future career. Throughout this course, I hope to gain a better knowledge of the

nutrition and social work fields. In addition, how these fields would directly

relate to the audiology profession. Like in the lecture, if all professionals have a

better understanding of the healthcare field as a whole then we can refer and

provide recommendations. This will be helpful and beneficial for all parties,

meaning the patient along with the healthcare professional. Finally, I would like

to have an increased awareness on media pertinent to the healthcare field. Our

society will be continuing to grow in social media now and in the future.

Therefore, I want to become more comfortable using healthcare applications. I am positive there will be more of this technology in the future and I am eager to continue learning about the newest advancements. The Core Competencies

PowerPoint demonstrated what ethics and values all professions agree upon. I felt it was a nice way to open the class, showing what the healthcare field shares in common. Before this PowerPoint, I had no prior knowledge regarding quality improvement and informatics. What I found really interesting is when [instructor] was discussing how important it is to know a little about each profession.

Furthermore this provides patient centered care because one can refer to other healthcare professionals. This can be advantageous to the patient along with the professional because making referrals to another physician can help with future patients. I thought it was beneficial to go over each professions certification board. It was interesting hearing about how evidence based practice was incorporated in each curriculum. All programs have somewhat of a different approach, but EBP is integrated in each profession whether it is classes or research. Something I learned about while discussing this topic was the subdivisions for the medical program. What was interesting was that although the two had differing curriculums, they both scored similar on the board's examination. I think this is a good option and opportunity for those that have already had experience working in the field. Another interesting fact I found out was the difference between a nutritionist and dietician. I was unaware that the dietician had more schooling.

Similarly, on the standards, DB reported liking work with students from other professions but had not previously had audiology or medical students in class. She enjoyed everyone's perspectives on evidence-based practice. She noted that she had never used an iPad extensively before and that the course created an opportunity for her to learn how:

I always like meeting students in other professions, because even though we are in

different disciplines, we all have the healthcare interest in common. It is nice to

have an audiology student and two medical students in class, because we did not

have either during the summer. I enjoyed hearing everyone’s perspectives on

evidence-based practice in their field. The lecture was easy to follow and

informative. Reading the PowerPoint before class helped me follow along. There

are no things that I can say I do not like about the class. If I had to say anything it

would be using new apps. However, I do not dislike it – it is simply different and

will take getting used to. I have never extensively used an iPad before, so this

will provide a good opportunity to learn how.

Furthermore, on the standards, DC expressed feelings of being intimidated. She claimed she had never been in a class with anyone other than her fellow undergraduate nursing students; she did not have many IP team experiences and had no knowledge of other professions' scopes of practice; she hoped the course would be a wonderful change. She was surprised to be given an iPad. She reported that the discussions really helped her understand the standards, and she enjoyed the lecture. Overall, she was extremely excited and ready to work with the team to ultimately benefit patient care:

After the first lecture was over I felt a little intimidated by the course itself. I never expected to be sitting in a class with all graduate students except my fellow nursing students and myself. Having never been in a class with anyone except undergraduate nursing students it was quite a change from what I’m used to. What worried me most was that the other students won’t be able take me seriously because I am still an undergraduate student. Being a student in the hospital setting sometimes I run into the problem that others do not considering myself as part of the healthcare team because I am are still learning. I don’t want this problem to be an issue throughout this course. However, if this is not an issue and the other healthcare professionals take me and the other nursing students seriously it will be a wonderful change to be a working member of a true healthcare team. I was surprised to be given an iPad during the first class period.

Having worked in several hospitals with computer charting you still don’t see many iPads in the hospital setting. Some of the residents at the hospitals do carry them but for the most part everyone uses COW’s, also known as computers on wheels. Bringing an iPad into our inter-professional course seems like a great idea. The iPad already has so many wonderful apps loaded onto it. These apps definitely seem like they will benefit each of us with our own area of study and ultimately help with our group collaboration throughout this course. After the second class I have settled into more of a routine for what to expect. The first class was slightly nerve wracking with most of the students being graduate students. The second class I think went very well and I was definitely more 549 comfortable. I really liked that each area of study went and discussed what they do and how they have gotten to that point. Even though I spend a lot of time in the hospital setting I haven’t had many experiences being around other healthcare professions. Because of this I wasn’t very educated on what everyone’s scope of practice was. I also enjoyed the student participation that occurred during the lecture. The discussions really helped me to understand the points that were being talked about and allowed me to connect the topics to each area of study that is represented in our class. Overall, I am extremely excited for this course and can’t wait to see how all of us can work together to ultimately benefit patient care.


Appendix L. Qualitative Data

The analysis of each case, and across the six cases, yielded useful quotes related to the students' self-concepts on the IOM standards. Table 96 presents the demographic data of the selected HSP students for the two instructional groups.

Table 96

Demographic Data for the Selected HSP Students, by Two Instructional Types

Age
  Participatory Group: PA = 21, PB = 21, PC = 23
  Direct Group: DA = 24, DB = 23, DC = 22
Gender
  Participatory Group: PA = Male, PB = Female, PC = Female
  Direct Group: DA = Female, DB = Male, DC = Female
Status
  Participatory Group: PA = Undergraduate, PB = Undergraduate, PC = Graduate
  Direct Group: DA = Graduate, DB = Undergraduate, DC = Undergraduate
Major
  Participatory Group: PA = Music Therapy, PB = Music Therapy, PC = Speech-Language Therapy
  Direct Group: DA = Audiology, DB = Nursing, DC = Nursing
Inter-professions Team
  Participatory Group: PA = Ohioans (Team 10), PB = Clinical CR3W (Team 11), PC = Ohioans (Team 10)
  Direct Group: DA = Health Avenger (Team 18), DB = Health Avenger (Team 18), DC = Interdisciplinary Dreams (Team 20)
Team Preference
  Participatory Group: PA = Working alone, PB = Working in teams, PC = Working alone
  Direct Group: DA = Working in teams, DB = Working alone, DC = Working in teams


Table 97

Survey Responses of Selected Cases, by Demographics, Instructional Type, Standards,

Initial Perceived Achievement Scores, Final Perceived Achievement Scores, and Change

Scores

Variable PA PB PC DA DB DC

tpref 2 1 2 1 2 1

Rp 7 2 5 5 7 5

Rp1 6 6 5 6 5 5

Rtw 3 4 5 5 5 4

Rtw1 7 7 5 6 4 5

Re 7 3 6 6 7 4

Re1 7 7 5 6 6 5

Rq 6 1 1 4 1 5

Rq1 5 5 5 5 1 4

Rinf 4 1 1 3 1 1

Rinf1 5 6 2 5 4 2

ipAch 27 11 18 23 21 19

fpAch 30 31 22 28 20 21

cpAch 3 20 4 5 -1 2

Note. Rp, Rtw, Re, Rq, and Rinf = initial (pre-survey) ratings for patient-centered care, interdisciplinary teamwork, evidence-based practice, quality improvement, and informatics, respectively; Rp1, Rtw1, Re1, Rq1, and Rinf1 = the corresponding final (post-survey) ratings; ipAch = initial perceived achievement; fpAch = final perceived achievement; cpAch = change in overall perceived achievement.

Table 98

Codes, Coding, and Keywords

Var: Rp (patient-centered care)
  Themeing the Data: Diabetes, antibiotic conflict, aurasma, patient provider, cranial nerves, mobile apps to inform patients, AAC
Var: Rtw (interdisciplinary teamwork)
  Themeing the Data: Dog bite, Burn case, Expert opinion, online collaborative tools, online planning tools, professional branding, LinkedIn, Twitter, Major swap
Var: Re (evidence-based practice)
  Themeing the Data: EBP, Problem Solving, When apps do a health professional's job, medical controversy, PubMed, google-hangout
Var: Rq (quality improvement)
  Themeing the Data: Infographic, HIPAA app, Expert opinion 2, creating a safety newsletter, patient safety
Var: Rinf (informatics)
  Themeing the Data: Explain Everything apps, Healthcare Informatics, Photos, Professional interview

Elaborative Coding (keyword frequencies across the journals): liked (60), really (95), thinking (76), interesting (37), feeling (44), need (11), because (61), happy (4), excited (8), hope (6), challenge (6), important (8), helpful (10), help (27), useful (9), believe (5), benefit (6), surprise (7), impressed (2), difficult (9), wish (1), explain (12), aware (3), wonder (4), feedback (4).
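The elaborative-coding column of Table 98 reports how often affect- and cognition-laden keywords (e.g., "liked", "really", "interesting") occurred across the journal entries. A small sketch of how such frequency counts could be tallied is shown below; the file name journals.txt and the exact keyword list are assumptions, so the counts it prints would not necessarily reproduce the table.

# Sketch: tally keyword frequencies across the combined journal text,
# in the spirit of the elaborative-coding counts in Table 98.
import re
from collections import Counter

keywords = ["liked", "really", "thinking", "interesting", "feeling", "need",
            "because", "happy", "excited", "hope", "challenge", "important",
            "helpful", "help", "useful", "believe", "benefit", "surprise",
            "impressed", "difficult", "wish", "explain", "aware", "wonder",
            "feedback"]

with open("journals.txt", encoding="utf-8") as fh:   # hypothetical corpus file
    tokens = re.findall(r"[a-z']+", fh.read().lower())

counts = Counter(tokens)
for word in keywords:
    print(f"{word} ({counts[word]})")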


Table 99

What Problems Would You Anticipate Working in Teams in this Class?

PA, Participatory Group (Pre): Everyone pulling his or her own part of work.
PA, Participatory Group (Post): Sometimes some group members do not pull their weight and everyone else has to pick up the slack.
DA, Direct Group (Pre): Some problems could be having differing opinions based on one's field. Also, figuring out a time for meeting with the inter-professional groups.
DA, Direct Group (Post): Trying to find the best time to meet as there are differing schedules.

PB, Participatory Group (Pre): I think that it may be difficult to find times to meet together because I know that we are all busy and at least one of my group members does not live in Athens on the weekends.
PB, Participatory Group (Post): Sometimes it was hard to find a time to meet as a group that we were all available.
DB, Direct Group (Pre): Conflicting schedules.
DB, Direct Group (Post): Different schedules.


Table 99 (continued)

PC, Participatory Group (Pre): Availability to meet with each other, we're all fairly busy.
PC, Participatory Group (Post): The question above is hard to answer--I enjoy working in a team and working alone. It depends on the project, and it also depends on the group members. I have had too much varied experience to say what I prefer. In a group, it is hard to have everyone equally participate. It becomes hard when people drag their feet or don't respond to emails.
DC, Direct Group (Pre): Conflicting class, work, or studying times if needing to meet outside of class and disagreements among team members if individuals cannot come to a compromise.
DC, Direct Group (Post): So far we have not worked in teams yet in this course. Some problems that could potentially arise are: others not pulling their weight with a topic, conflicts with times if we need to meet outside of class, and differing opinions.


Table 100

What Problems Would You Anticipate Working Alone in This Class?

PA, Participatory Group (Pre): Being overwhelmed.
PA, Participatory Group (Post): Becoming overwhelmed with the amount of work and sometimes having to have someone else help or ask a friend for clarification.
DA, Direct Group (Pre): Figuring out all the apps available and working with media.
DA, Direct Group (Post): Not having more ideas such as other professionals background.

PB, Participatory Group (Pre): I am not the most technological person, so figuring out some of the apps may be difficult for me on my own.
PB, Participatory Group (Post): Sometimes it was hard to submit assignments especially when they were video assignments.
DB, Direct Group (Pre): Not having the input of other disciplines to add to my work.
DB, Direct Group (Post): Not having others' ideas to work off of.

PC, Participatory Group (Pre): Being left to fend for myself in fields I'm unfamiliar with when working on case studies.
PC, Participatory Group (Post): Being unsure of what was to be turned in, and missing assignments.
DC, Direct Group (Pre): Only being able to reflect on your own opinion and difficulty finishing on time if a lot of work or research needs to be done in a short amount of time.
DC, Direct Group (Post): So far we have not had to work alone in this class. Working alone could create problems such as lack of opinions or ideas, and time constraints if others cannot help you with research or putting something together.


Table 101

What Benefits Would You Anticipate Working in a Team in This Class?

PA, Participatory Group (Pre): Creating new professional connections.
PA, Participatory Group (Post): Different view point and approaches to the same problem.
DA, Direct Group (Pre): Learning more about other healthcare fields and bringing ones skills from differing fields to the table.
DA, Direct Group (Post): Other professionals bringing their ideas to the table.

PB, Participatory Group (Pre): I think that I will benefit from working as a team because I will get to know a lot about their professions and be able to use them as resources.
PB, Participatory Group (Post): We worked really well together and were able to help each other out in areas that some people were better than others.
DB, Direct Group (Pre): Multiple perspectives and brainstorming of good ideas.
DB, Direct Group (Post): Learning about other professions which will help my inter-professional skill.

PC, Participatory Group (Pre): Knowing each professions scope of practice better, feel more comfortable when making referrals, more comprehensive treatment and assessment outcomes.
PC, Participatory Group (Post): Getting to know people from other majors and fields of study.
DC, Direct Group (Pre): Being able to collaborate and take everyone's ideas into consideration when coming up with a solution.
DC, Direct Group (Post): In the past some of the benefits I have encountered with team work include, different ideas and opinions, and being able to split a large amount of work up to accomplish the final product sooner.


Table 102

What Benefits Would You Anticipate Working Alone in This Class?

PA, Participatory Group (Pre): Not have to worry about scheduling conflicts.
PA, Participatory Group (Post): You know how everything will turn out and that you earned the grade by yourself.
DA, Direct Group (Pre): Learning about apps that could be beneficial for my field and ones I can use in the future.
DA, Direct Group (Post): Completing projects on my own schedule.

PB, Participatory Group (Pre): The only benefit that I can see from working by myself would be not having to plan around other people's schedules.
PB, Participatory Group (Post): I didn't have to leave my apartment.
DB, Direct Group (Pre): Flexibility of my own schedule and being able to apply the ideas I think are best.
DB, Direct Group (Post): Being able to work on my own schedule.

PC, Participatory Group (Pre): Perhaps bringing your individual "plan of attack" to your group then explaining why you chose that route. However, it seems counterintuitive to see a benefit in solo work in an inter-professional class.
PC, Participatory Group (Post): Getting things done more quickly.
DC, Direct Group (Pre): Being able to hold yourself accountable for your own grade and the time you spend working on something.
DC, Direct Group (Post): So far we have not had to work alone in this course but there are benefits of working by yourself. Some of these benefits are that you don't have to rely on others, only yourself, being able to work at your own pace, and not having any conflicting ideas.


Post perceptions of other disciplines. Given each discipline, what do you think of it? (For example, what words, phrases, or descriptions come to your mind?)

Table 103

Social Work

PA (Music Therapy), Participatory Group (Pre): Child services, abuse, advocacy.
PA, Participatory Group (Post): Advocacy, under-privileged.
DA (Audiology), Direct Group (Pre): Working in multiple settings such as hospitals, schools, nursing homes, and prisons. Trying to advocate for the patient's rights and their feelings.
DA, Direct Group (Post): Helping others who have behavioral or personality disorders. Also helping advocate for the patient as to hospital release.

PB (Music Therapy), Participatory Group (Pre): Working with struggling families, children in the foster care system, and ex-convicts.
PB, Participatory Group (Post): Social workers are very important. They help people with the necessities of life if they can't do it for themselves.
DB (Nursing), Direct Group (Pre): Tend to the social and mental situations that affect people. They can be great but I also know many social workers become easily callused.
DB, Direct Group (Post): Profession that dives into the social aspect of different cases. Good for providing resources.


Table 103 (continued)

PC (Speech-Language Pathology), Participatory Group (Pre): Advocate for children, adults, and families.
PC, Participatory Group (Post): Advocate for and allow disadvantaged populations become the best they can be, and understand that they are doing the best they can with what they have.
DC (Nursing), Direct Group (Pre): Foster children, adoption, working with families, mental health, behavioral health, advocating, child protective services, administrative, support, solving personal and/or family problems.
DC, Direct Group (Post): Person centered, therapy, caseworker, diagnosing, treating.


Table 104

Nursing

PA (Music Therapy), Participatory Group (Pre): Needles, hospital.
PA, Participatory Group (Post): Bedside, procedures, first-hand.
DA (Audiology), Direct Group (Pre): Can also work in a variety of settings. Caring for those who are ill. Administering medications along with communicating with doctors and other nurses to give the best care.
DA, Direct Group (Post): Caring for patients while in the nursing home or hospital. Distributing medications, pulling blood, and communicating with the doctor.

PB (Music Therapy), Participatory Group (Pre): Caring for people in hospitals, private practices, nursing homes and rehabilitation clinics.
PB, Participatory Group (Post): Nurses are kind of the essential part behind medical facilities running efficiently.
DB (Nursing), Direct Group (Pre): The core of healthcare. Nurses provide essential care for the patients and advocate for them. Nurses know the patients better than the doctors, usually.
DB, Direct Group (Post): The core of health care and the patient advocates.


Table 104 (continued)

PC (Speech-Language Pathology), Participatory Group (Pre): Primary caregiver to patient in inpatient and outpatient setting, nurturing, comforting, obtains and monitors patient vitals, makes note of changes in patient's state.
PC, Participatory Group (Post): Heart of many hospitals and clinics. Nurturing, interpret lab results, check vital signs, may help to answer questions a family or patient may have.
DC (Nursing), Direct Group (Pre): Patient advocating, patient centered care, teamwork, hospitals, healthcare, patient advocate, education.
DC, Direct Group (Post): High quality healthcare, patient focused care, education, medication administration, assessment, collaboration with other healthcare professionals, preventative care, LPN, RN, STNA, CNP, CRNA.


Table 105

Dietician

PA (Music Therapy), Participatory Group (Pre): Diet.
DA (Audiology), Direct Group (Pre): Food pyramid, several small meals a day, balanced meals.
DA, Direct Group (Post): Food, diet, food group, balanced meals.

PB (Music Therapy), Participatory Group (Pre): Working in a private practice to put people who are overweight on a diet plan or making a specific diet for athletes.
DB (Nursing), Direct Group (Pre): Focuses on dietary aspects of healthcare. Unfortunately, it is probably frustrating because many people do not listen to dieticians, especially if they did not go to them independently.
DB, Direct Group (Post): Profession for providing dietary counseling and guidelines.

PC (Speech-Language Pathology), Participatory Group (Pre): Meal plan, gain/lose weight, helps patients on modified diet find foods they may enjoy.
DC (Nursing), Direct Group (Pre): Health and wellness, eating healthy, my plate, nutrition, balance, vitamins, supplements, health and wellness.
DC, Direct Group (Post): Registered dietitians, healthy lifestyle, improved health, education, healthy eating, obesity, specialized diets, planning, evaluation, and implementation of food, nutritional facts, my plate, portions.

(No Participatory Group (Post) responses appear for this table.)


Table 106

Physical Therapy

PA (Music Therapy), Participatory Group (Pre): Stretches.
PA, Participatory Group (Post): Balance, stretches, broken bones, mobility.
DA (Audiology), Direct Group (Pre): Muscles, balance system.
DA, Direct Group (Post): Muscle activation, therapy, fitness, exercises.

PB (Music Therapy), Participatory Group (Pre): Working with people with disabilities, the elderly, or people who have experienced an injury to make them physically healthy again, or simply able to function in their daily lives.
PB, Participatory Group (Post): Physical therapy is very important in many settings especially the rehabilitation setting. They work with a wide variety of populations that I did not realize.
DB (Nursing), Direct Group (Pre): Critical for rebuilding physical deficits from injury, condition, etc. I feel that physical therapists play a large role in the success of many procedures, because they are the ones who help patients regain abilities, physically.
DB, Direct Group (Post): Profession for rehabilitating patients regarding their physical wellness.

PC (Speech-Language Pathology), Participatory Group (Pre): Improve or maintain function/mobility in extremities and joints, movements like walking, train use of residual function.
PC, Participatory Group (Post): Help to maintain current function and mobility, and help to rehabilitate to former state, or help to use residual strength.
DC (Nursing), Direct Group (Pre): Injuries, rehabilitation, activities of daily living.
DC, Direct Group (Post): Exercises, rehabilitation, orthopedics, reduce pain, restore function, range of motion, or strength, balance, neurological disorders, develop fitness and wellness programs, nursing homes, education, patience, sports injuries.


Table 107

Speech-Language Pathology

PA (Music Therapy), Participatory Group (Pre): Stutter, dysphagia.
PA, Participatory Group (Post): Swallowing disorders, speech, stuttering.
DA (Audiology), Direct Group (Pre): Speech, aphasia, therapy approach, multiple settings (school, hospital, nursing homes).
DA, Direct Group (Post): Speech, articulation, swallowing, autism, aphasia.

PB (Music Therapy), Participatory Group (Pre): Working with children with articulation disorders in the school, the elderly with aphasia and dysphasia, people with brain injuries, and people with other disorders.
PB, Participatory Group (Post): Speech-Language Pathologists also work with a wide range of populations. They work with phonation and articulation along with swallowing disorders and things of that sort.
DB (Nursing), Direct Group (Pre): More important than most people think. I know they can obviously help teach speech and provide therapy for speech deficits. However, they can play crucial roles in preventing choking hazards in the hospital.
DB, Direct Group (Post): Profession dealing with speech and language. Helpful for everything from therapy for speech impediments to guidance for swallowing.

PC (Speech-Language Pathology), Participatory Group (Pre): Assess, diagnose, and treat disorders of language, articulation, dysphasia, voice, fluency in school, home health, hospital/medical, acute/long term rehabilitation facility, VA, etc. Educate professionals within work facility about field to promote preventative services.
PC, Participatory Group (Post): Prevent, evaluate, diagnose, and treat disorders involving all aspects of communication, swallowing, and executive functioning skills related to language disorders.
DC (Nursing), Direct Group (Pre): Speech, speech rehabilitation, swallowing.
DC, Direct Group (Post): Swallowing studies, speech and language disorders, feeding disorders, voice disorders, clef lip/palate, stroke/TBI, language skills, articulation, aphasia, delayed language disorders.


Table 108

Medicine

PA (Music Therapy), Participatory Group (Pre): Doctor, hospital.
PA, Participatory Group (Post): Doctor, diagnosis, leader.
DA (Audiology), Direct Group (Pre): Healing and treatment, prognosis, medication.
DA, Direct Group (Post): Diagnosing, treatment, doctor, medication.

PB (Music Therapy), Participatory Group (Pre): Healing people with all different types of disorders and illnesses. Working with people of all ages. Advising other medical professionals as to what to do.
PB, Participatory Group (Post): Doctors are important because they have the ability to diagnose and prescribe medicine. Doctors are the decision makers.
DB (Nursing), Direct Group (Pre): The most respected out of all healthcare disciplines. However, they are only able to do their job with the help of other disciplines. They are not miracle workers, but they are a part of a team. Nonetheless, it is challenging to be a doctor of medicine.
DB, Direct Group (Post): The basis of western medicine.

PC (Speech-Language Pathology), Participatory Group (Pre): Diagnose underlying problem, correct physical abnormality, diagnose organic syndrome, refer to specialized professional.
PC, Participatory Group (Post): Primary diagnosis, treat, surgery, prescribe medications.
DC (Nursing), Direct Group (Pre): Doctors, MD, DO, healthcare, pharmaceuticals, hospitals, patient education, healthcare professionals, patient advocacy.
DC, Direct Group (Post): Holistic medicine, allopathic, osteopathic, structure, function, manipulation, MD, DO, residency, diagnose, treat, prescribe, preventative care, specialties, surgery.


Table 109

Music Therapy

PA (Music Therapy), Participatory Group (Pre): Music, guitar, goals.

PB (Music Therapy), Participatory Group (Pre): This is my field, so I know a lot about it. We work with people from all different populations in many different settings to improve on non-musical goals with the use of music as a therapeutic tool.

PC (Speech-Language Pathology), Participatory Group (Pre): Alternative forms of therapy to reach persons in a different way, I think of Autism when I hear this profession.

(No Direct Group responses appear for this table.)


Table 110

Audiology

DA (Audiology), Direct Group (Pre): Hearing loss, audiologist, hearing aids, cochlear implants, vestibular system, auditory processing.
DA, Direct Group (Post): Hearing aids, cochlear implants, vestibular testing, hearing evaluations, auditory processing disorders.

DB (Nursing), Direct Group (Pre): Works with hearing.
DB, Direct Group (Post): Still not sure but something with the study of hearing.

DC (Nursing), Direct Group (Pre): Hearing, hearing aids, deafness, communication.
DC, Direct Group (Post): Hearing tests, cochlear implants, hearing aids, impaired hearing, sound, balance, communication, ear canal, hearing rehabilitation.

(No Participatory Group responses appear for this table.)


Appendix M. Testing for Assumptions

Table 111

Homogeneity of Variances and Equality of Means, by IP Teams

Variable  Method   Levene  df1  df2   p    Welch  df1  df2     p
ipAch     Part     0.92    8    31    .51  0.96   8    11.18   .51
ipAch     Direct   1.41    8    41    .22  2.84   8    16.29   .04
fpAch     Part     2.64    8    31    .03  0.74   8    11.44   .66
fpAch     Direct   0.89    8    41    .53  2.21   8    16.22   .08
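Table 111 pairs Levene's test of equal variances with Welch's heteroscedasticity-robust one-way ANOVA for each instructional group across the IP teams. The sketch below shows one way to reproduce both tests in Python; scipy provides Levene's test directly, and Welch's F is implemented by hand from its usual formula since scipy does not ship it. The grouping and score column names ("method", "ipteam", "ipAch", "fpAch") are assumptions.

# Sketch: Levene's test plus Welch's one-way ANOVA across IP teams,
# computed separately for each instructional method (cf. Table 111).
import numpy as np
import pandas as pd
from scipy import stats

def welch_anova(groups):
    """Welch's F for k independent groups (arrays of scores)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v
    grand = np.sum(w * m) / np.sum(w)
    num = np.sum(w * (m - grand) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    f = num / den
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)
    return f, df1, df2, stats.f.sf(f, df1, df2)

df = pd.read_csv("journal_scores.csv")          # hypothetical data file
for dv in ["ipAch", "fpAch"]:
    for method, sub in df.groupby("method"):
        groups = [g[dv].to_numpy() for _, g in sub.groupby("ipteam") if len(g) > 1]
        # center="mean" matches the SPSS version of Levene's statistic
        lev_stat, lev_p = stats.levene(*groups, center="mean")
        f, df1, df2, p = welch_anova(groups)
        print(f"{dv} / {method}: Levene={lev_stat:.2f} (p={lev_p:.2f}), "
              f"Welch F={f:.2f}, df=({df1}, {df2:.2f}), p={p:.2f}")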

Table 112

Instructional Types and their Inter-Professional Teams

Team   Participatory (N = 40)    Direct (N = 50)
A      Knowteck (5)              Health Avengers (6)
B      The Ohioans (6)           Health Crusaders (6)
C      The Clinical Crew (5)     Interdisciplinary Dreams (7)
D      Interprof Inquires (4)    Slow Loris (6)
E      Fab5 (3)                  Temporary Townies (6)
F      3's Company (4)           Code Blue (5)
G      Group 1 (4)               Fab Five (4)
H      MIJEL (5)                 We Can Fix That (5)
K      OG Bobcat (3)             Teamwork Makes the Dream (5)


Appendix N. Original and Revised Topic and Research Questions

Original Topic

Comparing Participatory Learning Instruction and Direct Instruction

of Interdisciplinary Health Science Professions Students’ Knowledge Achievement

in a Group Module Project

Original Research Questions

Research question 1. Do HSP students who are in the participatory instructional group have greater gains in self-reported knowledge achievement scores compared to the direct instructional group?

Research question 2. How do the HSP students’ journal reflections help explain their self-reported knowledge achievement scores on a group module project?

Research question 3. What instructional strategy provides significant instructional impact on HSP students' self-reported knowledge achievement in regard to meeting the IOM standards on a group module project?

Research question 4. How does the participatory instruction of IOM standards on a group module project affect students’ self-reported knowledge achievement mean scores in their majors?

Research question 5. How does the direct instruction of IOM standards in a group module project affect students’ self-reported knowledge achievement mean scores in their majors?

Research question 6. How do HSP students feel about working in teams on a group module project with regard to participatory and direct instructional types?

Revised Topic

Comparing Participatory Learning Instruction and Direct Instruction

of Interdisciplinary Health Sciences and Professions Students’ Perceived Achievement

in a Group Module Project

Revised Research Questions

Research question 1 (now Q1). Do Health Sciences and Professions students who are in the participatory instructional group have greater gains in perceived achievement scores compared to the direct instructional group?

Research question 2 (now Q10). How do the HSP students’ journal reflections help explain their self-reported knowledge achievement scores on a group module project?

Research question 3 (now Q5). What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their majors with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?

Research question 4 (now Q3). How does the participatory instruction of IOM standards on a group module project affect students’ final perceived achievement mean scores in their majors, controlling for their initial perceived achievement scores?

Research question 5 (now Q4). How does the direct instruction of IOM standards in a group module project affect students' final perceived achievement mean scores in their majors, controlling for their initial perceived achievement scores?

Research question 6 (now Q2). How do HSP students feel about team preference on a group module project with regard to participatory and direct instructional types?

New Questions Added

Research question 6. What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their team preferences with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?

Research question 7. How does a participatory instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their

IP teams, controlling for their initial perceived achievement scores?

Research question 8. How does the direct instruction of IOM standards on a group module project affect HSP students’ final perceived achievement scores in their IP teams, controlling for their initial perceived achievement scores?

Research question 9. What instructional strategy provides significant instructional impact on HSP students’ final perceived achievement scores in their IP teams with regard to meeting the IOM standards on a group module project, controlling for their initial perceived achievement scores?


Appendix O. Funding Sources for the HSP Program

Table 113

Sources of Project Funding, Amount and Year for the Program

Title: Technology Augmented Team-Building in Inter-Professional Health Science Graduate Education
  Source: Ohio University 1804 Fund
  Amount (% match): $21,163
  Year: 2012-2013

Title: Health-Care Access Initiative #1 - Inter-Professional Health Teams Project
  Source: Centers for Medicare and Medicaid Services via Ohio State University
  Amount (% match): $864,846.34 (includes 51% match)
  Year: 2012-2013

Title: Health-Care Access Initiative #1 - Inter-Professional Health Teams Project
  Source: Centers for Medicare and Medicaid Services via Ohio State University
  Amount (% match): $2,050,703 (includes 51% match)
  Year: 2013-2015

Note. ODM Federal Funding: G-1415-07-0060; ODM201409

Here is a link to the funding program: http://grc.osu.edu/medicaidpartnerships/healthcareaccess/

