

8-2016 Guidelines for Interpretive Fidelity in Mixed Methods within the Context of a Randomized Controlled Trial Amanda L. Garrett University of Nebraska - Lincoln, [email protected]


Garrett, Amanda L., "Guidelines for Interpretive Interview Fidelity in Mixed Methods Research within the Context of a Randomized Controlled Trial" (2016). Public Access Theses and Dissertations from the College of Education and Human Sciences. 276. http://digitalcommons.unl.edu/cehsdiss/276

GUIDELINES FOR INTERPRETIVE INTERVIEW FIDELITY IN MIXED METHODS

RESEARCH WITHIN THE CONTEXT OF A RANDOMIZED CONTROLLED TRIAL

by

Amanda L. Garrett

A DISSERTATION

Presented to the Faculty of

The Graduate College at the University of Nebraska

In Partial Fulfillment of Requirements

For the Degree of Doctor of Philosophy

Major: Interdepartmental Area of Psychological Studies in Education

(Quantitative, Qualitative, and Psychometric Methods)

Under the Supervision of Professors Susan M. Swearer & Wayne A. Babchuk

Lincoln, Nebraska

August, 2016

Amanda L. Garrett, Ph.D.

University of Nebraska, 2016

Co-Advisors: Susan M. Swearer & Wayne A. Babchuk

Interviews fascinate and capture people's attention. Researchers value the data they glean from interviews, while participants appreciate being asked to share their voices and opinions. Even some of the most complex, tightly controlled research designs, such as randomized controlled trials, are now being revised to include interviews. But how do we know that the interviews being conducted are valid? We need to know more about how interviews are developed and delivered within the context of intervention research. The aim of this methodological dissertation is therefore to create a set of recommendations for interpretive interviews in a mixed methods randomized controlled trial. This dissertation research is part of a larger NIH-funded longitudinal research project on exercise adherence. The interview fidelity process emerged through qualitative analysis, a dialectical pluralism of research paradigms, and the literature on treatment fidelity and validity. Findings indicated five interview fidelity ideals: (1) research contributions, (2) interviewer-participant association, (3) participant accommodation, (4) process and procedures, and (5) data management dimensions. Implications for various research audiences are discussed.

The outcomes will assist researchers in designing and conducting interviews in ways that encourage and increase validity, both within mixed methods intervention trials and across the broader base of mixed methods studies that utilize interviews.


Acknowledgements

I acknowledge my committee members who served so that I could graduate:

I thank Susan Swearer for taking on the assignment as my co-chair after the abrupt departure of my previous advisor, even though she is outside of my program. I appreciate Wayne Babchuk for his guidance as my co-chair from the QQPM program. I recognize Charles Ansorge for reading during his retirement as emeritus faculty. I express gratitude to Tony Albano for reading in the eleventh hour. I thank Bunny Pozehl for permitting her grant to be part of this project as well as for her participation. Thanks to Patti in the EDPS office for always being there for me. I appreciate all the families who created scholarship funds and selected me, signifying belief in my abilities. I also found strength in remembering the many colleagues who passed through the program and the Office of Qualitative and Mixed Methods Research before me and lifted me up.

I thank my family for believing in me so that I could accomplish this. I wish for my son Konnor to walk his own path.


TABLE OF CONTENTS

CHAPTER 1: Introduction
    Problem Statement
    Purpose
    Research Questions
    Audiences Who Will Benefit from Study
    Researcher Reflexivity

CHAPTER 2: Literature Review
    Conceptual Framework
    Randomized Controlled Trials
    Hearts on Track Study
    Treatment Fidelity Framework
    Interview Fidelity Framework
    Interviews
    History of Interviews
    Typology
    Interpretive Interviews
    Health Sciences
    Mixed Methods
    Longitudinal
    Question Formation
    Methods
    Site and Participants
    Recording and Note Taking
    Implementation
    Interviewer Skills
    Research Paradigms
    History of Paradigms
    Paradigms Defined
    Paradigm Wars
    Paradigm Peace
    Paradigm Relevance
    Inquiry Paradigms
    Positivism
    Constructionism/Constructivism/Interpretivism
    Postmodernism/Post-structuralism
    Transformative
    Dialectical Stance
    Mental Model
    Team Stance
    Personal Stance
    Research Validity Context
    Context
    Context
    Mixed Methods Research Context
    Validity Summary
    Chapter Summary
    Bridging the Literature

CHAPTER 3: Methods
    Site & Participants
    Data Collection
    Data Analysis
    Validity
    Chapter Summary

CHAPTER 4: Results
    Format
    Interview Fidelity Ideals
    Research Contributions
    Interviewer-Participant Association
    Participant Accommodation
    Process & Procedures
    Data Management Dimensions
    Philosophical and Validity Contexts
    Interview Fidelity Framework

CHAPTER 5: Discussion & Conclusions
    Summary of Major Findings
    Validity Contributions of Interview Fidelity Ideals
    Findings Related to Literature
    Limitations
    Implications

References

List of Tables

Table 3.1 Data Sources Table
Table 4.1 Interview Fidelity Process

List of Appendices

Appendix A: Interview Fidelity Ideal Matrix
Appendix B: Code List
Appendix C: Interview Fidelity Ideals Summary
Appendix D: Interview Manual
Appendix E: Booster Session Materials
Appendix F: Interview Guide


CHAPTER 1: Introduction

Problem Statement

We live in an interview society, which values interviewing as a primary mode of information gathering and sharing (Atkinson & Silverman, 1997). Interviews are always around us—in newspapers, on television, and in the infinite repository of all, the Internet.

Interviews cut across many fields (Gubrium & Holstein, 2002; King & Horrocks, 2010; Patton, 2015), and journalists communicate with a wide variety of professionals across many disciplines (e.g., political and world leaders, scientists, financial and technology gurus, health professionals) to report the latest news and current events. Interviewers also extend into the entertainment realm, corresponding with willing or unwilling celebrities, athletes, and controversial figures to bring us the latest gossip from the rich, famous, infamous, and those basking in the spotlight for their moments of fame.

The conversational nature of interview encounters makes them a comfortable forum for discussion. Not surprisingly, many individuals have firsthand experience, having taken part in a variety of different types of interviews over the course of their lives: applying for jobs, participating in national polls, attending parent-teacher conferences, getting medical examinations, meeting with financial advisors, or even speed dating.

Interviews are a popular and important research method. This tool is the dominant data collection strategy for qualitative inquiry (Merriam & Tisdell, 2016; Sandelowski, 2002). Interviews are a versatile research tool crossing all research paradigms, theoretical perspectives, and methodologies (Trainor, 2013). In research conducted with a subjectivist epistemology, interviews are often considered to be the gold standard (Barbour, 2008). In the research literature, interviews are ubiquitous tools, as likely to be used in correlational investigations as in case studies and mixed methods research.

Interview methods themselves can take different forms depending on their use, ranging from highly structured to semistructured to very loosely formatted (Bernard, 2011; Kelly, 2013; Merriam & Tisdell, 2016; Roulston, 2010; Rubin & Rubin, 2005; Silverman, 2006). Interviews have provided investigators with important data across many academic fields, advancing our understanding and providing essential theoretical frameworks. The research benefits of conducting interviews are plenty, particularly if done by an experienced interviewer (Kvale & Brinkmann, 2009). Because interviews are such a central research method, it is vital to conduct them with integrity. However, the validity of interviews has not received sufficient attention in the research literature.

The use of interviews in both mixed methods studies and randomized controlled trials (RCTs) is widespread. Many fields have utilized interviews, particularly interpretive interviews, to illuminate notable research findings and theories (Kvale & Brinkmann, 2009; Merton & Kendall, 1946). The richness and depth, as well as the subtlety and nuance, gleaned from a participant's story during the interview and analysis demonstrate the importance of this research method.

Interviews also provide key information in mixed methods approaches. As mixed methods continues its expansion across fields, researchers require tools to understand and integrate interviews as a valuable source of data in this framework. Methodologists have called for a modification of methods or tools from a monomethod research conceptualization (usually associated with either qualitative or quantitative data) to a mixed methods framework (Hesse-Biber & Johnson, 2013; Johnson, McGowan, & Turner, 2010). Interviews span research methodologies but remain complex in their conduct. Little has been written on the topic of interviews within the realm of mixed methods research (Morse, 2012). What exists in the literature focuses generally on interview types and mixed methods research designs, not on the practical level of how to carry out the interviews with fidelity as part of a mixed methods study (Zohrabi, 2013). Methodologists have called for modifying methods and bringing them into the mixed methods framework, as well as for increased validation of interviews within the mixed methods context (Hesse-Biber & Johnson, 2013; Johnson & Turner, 2003). The aim of this research is to respond to that call and offer some guidance for skillfully conducting quality interpretive interviews within a mixed methods randomized controlled trial framework.

Purpose and Research Questions

The purpose of this dissertation is to advance interview fidelity ideals as a means for conducting interpretive interviews within the context of a mixed methods study. These fidelity constructs were developed from a larger project, a longitudinal RCT study of exercise adherence among heart failure patients. The research questions guiding this study of interview fidelity are the following:

1. What are the interview fidelity ideals for the Hearts on Track study?

2. How can these interview fidelity constructs be operationalized across multiple paradigms (e.g., postpositivism and constructionism)?

3. How did this researcher navigate between two philosophical perspectives to arrive at a negotiated path for validity or interview fidelity?

4. What is added to the concept of interview fidelity from treatment fidelity standards?

5. What is added to the concept of interview fidelity from the validity literature?

By taking the time to look back on the interview process, the researcher (A. Garrett) has a unique opportunity to reflect on it more critically and to assist other researchers and methodologists in achieving interview fidelity.

Audiences Who Will Benefit from Study

Methodological discussion of the fidelity of interviews raises the quality of research by providing both researchers and methodologists with an awareness of the ramifications of the decisions made and enacted (or not) during the course of a study. Those working to advance the utility of research, as well as those aspiring to improve the quality of content and findings in their research, such as those in the health sciences or education, gain from increased attention to interview fidelity. Understanding the types of considerations and decisions involved in conducting interviews will help researchers, novice through advanced, glean more authentic data when planning and embarking on new projects. This research may also offer potential solutions for those already in the midst of an existing project, and reviewing progress during and after a study creates an opportunity for self-appraisal and continuous improvement.

Researcher Reflexivity

My education and experience enable me to offer unique insights into the issue of interview fidelity. As a graduate student at the University of Nebraska-Lincoln in the Educational Psychology Department, with a specialization in Quantitative, Qualitative, and Psychometric Methods, I have undergone extensive training and coursework to thoroughly understand approaches to research such as clinical trials and interviews. A focus on validity and quality has always been important to me as a researcher and was a reason I came to study research methods. I have many experiences from which to draw for this work. I have directed and worked in the Office of Qualitative & Mixed Methods Research (1 year and 8 years, respectively), worked for the Bureau of Sociological Research and the Sociological/Behavioral Sciences Research Consortium (3 years), and consulted for the University of Nebraska Medical Center (4 years).

I consider my personal characteristics and experiences as they relate to the phenomena under study. When I think about who I am as a person and how I relate to others, I quickly come upon my introversion. I take in the environment carefully and speak rarely, usually after some time. Personally, I have found my introversion to be a positive attribute when it comes to interviewing. Enjoying listening much more than speaking is a virtue when moderating and guiding an interview, and I have found that I am in esteemed company in this regard (Cain, 2013; Wolcott, 1990). I also have a healthy skepticism. In the words of Robert Rubin, "Some people are more certain of everything than I am of anything" (Cain, 2013, p. 97). The quest for greater clarity forces me to search out all leads and develop a deep intimacy with the data. Despite my role in the larger RCT as supervisor of the interviewers and qualitative analyst, I do not have close personal connections with the phenomena of exercise adherence or heart conditions. I do, however, have multiple chronic illnesses, so I intimately understand pain, medication issues, and side effects.

CHAPTER 2: Literature Review

Conceptual Framework

Randomized controlled trials (RCTs). Randomized experimental research is seen as the benchmark for rigorous, scientific research (Shadish, Cook, & Campbell, 2002). The RCT is accorded gold-standard status, with platinum status bestowed upon veins of research where data were gathered from multiple RCTs (Bolton, 2008). RCTs are considered the pinnacle of evidence within health communities, such as in evidence-based medicine (EBM) (Devisch & Murray, 2009; Grossman, 2008).

Randomized controlled trials are often employed for their potential to clearly illuminate outcome differences among groups that are attributable to treatment effects. Interventions are formulated to uncover the "active causal component" or, in lay terms, what works (Bolton, 2009, p. 161). These trials work well in highly controlled settings, such as a medical environment where intervention protocols and procedures, like random assignment, can be carried out. RCTs are best implemented in structured studies designed to verify outcomes, and they function to generalize those discovered outcomes to selected populations. These interventions often lack the contextual factors that add dimension and richness (Goldberg, 2006; Muncey, 2009). Personal experience (patient or medical personnel) and idiosyncrasy are where the RCT "loses traction" (Bolton, 2009, p. 163; Devisch & Murray, 2009). There are many reasons qualitative data are added during a mixed methods intervention trial: substantiating the quantitative data, providing increased understanding of the lived experience of the participants through the trial, identifying constructs that may directly or indirectly influence the outcome, understanding unplanned occurrences, and exploring how contextual factors interact with the treatment (Creswell, Fetters, Plano Clark, & Morales, 2009; Sandelowski, 1996; Song, Sandelowski, & Happ, 2010; Spillane, Stitziel Pareja, Dorner, Barnes, May, Huff, & Camburn, 2010). Conceptualizing and integrating diverse methods into a program of inquiry is increasingly common within RCT frameworks (Song, Sandelowski, & Happ, 2010).

Hearts on Track study. The study that provided the context for research on the methodological issue of interview fidelity was an RCT from the nursing field. This trial examined exercise adherence among heart failure patients over an 18-month period. The intervention included access to an exercise facility, group sessions on behavioral strategies for exercise, and an exercise coach for individualized attention and additional exercise approaches. Standard care participants received facility access only.

The qualitative strand explored the perceptions and experiences of participants and study personnel in an effort to contextualize the exercise adherence experience. Specifically, this qualitative purpose is outlined in the grant proposal: Aim 5) We will interview the study participants, individual coaches, and group session leaders during the adoption, transition, and maintenance phases of the Hearts on Track intervention. These interviews served to contextualize the experiences of those who were participating in the trial. On occasion, in health research trials, naturalistic research is added to an intervention to individualize the health or illness experience and provide compelling patient perspectives that can affect how the results are conveyed and policy is implemented (Melia, 2013; Tatano Beck, 1993). Intervention group participants were interviewed longitudinally at four time points during their participation: 3 months, 6 months, 12 months, and 18 months. Coaches and group session leaders were also interviewed at these intervals (Pozehl et al., 2014).

Because this RCT included a strong qualitative aim, it was conceptually better understood as a mixed methods RCT. This was not how the project was originally presented to the funders, but it was informally recognized by the nursing researchers as a mixed methods study during the research process. For the purposes of this dissertation project, it made the soundest conceptual sense to view the project as a mixed methods RCT. The primary project resembled a parallel mixed design in which the strands were relatively independent yet addressed related parts of the overall research question (Teddlie & Tashakkori, 2009). The interviews serve to enhance and provide detailed contextual information on the experience of adherence. These data may be complementary to or divergent from the results of the intervention, but the overall purpose of mixing was complementarity (Greene, 2007). The challenges in aspiring to interview fidelity were directly related to the specific issues arising from this study.

Treatment fidelity framework. Treatment fidelity is connected to validity and is part of experimental, objectivist research. "Treatment fidelity refers to the methodological strategies used to monitor and enhance the reliability and validity of behavioral interventions" (Bellg et al., 2004, p. 443). Treatment fidelity is vital to RCTs (Bruckenthal & Broderick, 2007). Fostering treatment fidelity increases both internal and external validity (Bellg et al., 2004; Borrelli et al., 2005; Dyas, Togher, & Siriwardena, 2014; Moncher & Prinz, 1991). Thus, treatment fidelity allows for accurate answers to the research questions regarding the direct effect of the intervention on the outcome, creates precision in replication, increases power and effect size by minimizing statistical variability, and furthers generalization.

Treatment fidelity has been conceptualized by researchers in a number of ways, from general frameworks to individualized instruments and specific strategies. Beginning at the macro level, Bellg and colleagues (2004) at the National Institutes of Health Behavior Change Consortium (BCC) Treatment Fidelity Working Group developed five components for ensuring treatment fidelity: design, provider training, delivery of treatment, receipt of treatment, and enactment of treatment skills. These make a suitable framework for thinking about and conceptualizing another form of fidelity, interview fidelity. Many in the nursing field have utilized and built on this work, offering strategies and lessons learned from implementation, exploring meaning in variations in fidelity data, and overcoming technological challenges (Bozak, Pozehl, & Yates, 2012; Carpenter et al., 2013; Resnick et al., 2005a; Resnick et al., 2005b; Resnick et al., 2009; Yates et al., 2013). Similar to this model, and developed from the literature, Gearing, El-Bassel, Ghesquiere, Baldwin, Gillies, and Ngeow (2011) composed their four-category (five sub-category) comprehensive intervention fidelity guide (CIFG) checklist. Much earlier, Moncher and Prinz (1991) offered considerations such as supervision of treatment agents, training, and treatment manuals. Others have customized treatment fidelity to their projects. Black, Wenger, and O'Fallon (2015) used a grounded approach to ensure that key behavioral components of the intervention were part of their fidelity instrument. Song, Happ, and Sandelowski (2010) based their intervention fidelity model on elements of the intervention, the literature, and theory.

Treatment fidelity should be dutifully considered and applied throughout all phases of the research process, from planning through the completion of the study, to prevent problems from arising: a poorly crafted or irreplicable research design, attrition of research personnel or participants, differential training of personnel, and unequal implementation of the intervention (Bellg et al., 2004; Gearing et al., 2011). Given the vital nature of intervention fidelity, its breach is quite a serious matter. Dyas et al. (2014), in a pilot study, found instances in which fidelity was violated. The breach was not isolated but spread into other areas of the intervention and served to sabotage it. One particular aspect of the research that may be overlooked but is fundamental is the overall theoretical underpinning of the study. "The needs of each study are different and ideally the components of the treatment fidelity plan are selected on the basis of the theoretical and clinical framework for each intervention" (Bellg et al., 2004, pp. 450-451). The concept of fidelity, along with theory and research epistemologies, will help give form to the development of valid interpretive interviews within a mixed methods study.

Interview fidelity framework. In an interpretive research context, the researcher is often viewed as the instrument (Kvale & Brinkmann, 2009). With any instrument, there can be variety in how it is designed, implemented, and experienced. So, how does one ensure the skillful implementation of interpretive interviews in a longitudinal, mixed methods RCT? What considerations are there when designing interviews? Are there special strategies for multiple interviewers across several sites? How could interrater reliability be established? What issues arise during the course of interviewing longitudinally as part of an RCT that require attention? These are the issues to consider when conducting interviews as part of a mixed methods clinical trial.

The process of arriving at interview fidelity ideals is proposed as a possible solution to this array of challenges. The researcher will iteratively process data from the broader study, personal experiences, and ideas from the expansive literature on treatment fidelity and validity to arrive at interview fidelity ideals within the particular context of the research setting. The researcher used a variety of devices as scaffolds for this process (see Appendix A).

The term "interview fidelity" does exist in the literature. However, its meaning was heretofore limited to highly structured contexts where interview questions were read verbatim (Butterfield, Borgen, Amundson, & Maglio, 2005; Butterfield, Borgen, Amundson, & Erlebach, 2010; Kissi, Dainty, & Liu, 2012; Nascimento, Majumdar, & Jarvis, 2012). What is being advocated here is an expanded and more nuanced view of interview fidelity as quality, so that it can be applied to additional interview formats, such as interpretive interviews. As such, it can be viewed as devising a new use for the term.

Interviews

Interviewing as a form of data collection can provide valid data in a range of forms. Researchers add richness to their approach when they employ interviews as a method, a methodology, or a method in the service of another methodology (Platt, 2012; Trainor, 2013). Defined in these ways, interviews can take several perspectives based on differing ontologies. Schwandt (2001) dichotomizes two viewpoints. The first, a behavioral model, is one of direct and unfettered access to reality through asking solid, well-prepared, postpositivist questions; the participant responds by pouring out answers in what is often referred to simply as a stimulus-response reaction. The second model involves the co-construction of knowledge to better elucidate participants' perspectives rather than yielding more narrowly focused responses to the interviewer's predetermined questions.

It is within the spirit of the latter definition that this dissertation research is based. Simply, an interview can be thought of as an "inter view, an inter change of views" (Kvale, 1996, p. 2). Interpretive interviews were selected to contribute to the context of the RCT by drawing on the participants' sense of the importance of the individual and his or her recounted and reflected-upon experience with long-term exercise with chronic heart failure. Thus, the research team had an epistemological vision that data are generated through the interactional nature of interviews (Mason, 1996).

History of interviews. No matter its use, function, or definition, the interview has undergone change over time and across disciplinary frameworks. Many different researchers, with a myriad of ontological and epistemological viewpoints, have contributed to the scholarship and enriched the dialogue surrounding interviews. Variability in praxis remains within and across fields, each handling interviews with slightly different tools and techniques.

Platt (2012) acknowledges the difficulty of pinning down such an expansive and elusive construct as the interview but attempts an overarching historical review, revealing that interviews did not develop along a systematic, linear path. Some of the earliest interview research did not use the term interview but spoke of a divergence from the questionnaire. During the Depression era and onward, interviews were used to build oral histories by obtaining the personal accounts of the suffering of the downtrodden and disenfranchised (Bogdan & Biklen, 2007). Later work was derivative of marketing and political research and the modern survey, with its fields of closed and open responses and perpetual quest for reliability and validity. Next, naturalistic and postmodern perspectives allowed flexibility in structure, saw the participant as a collaborator, and viewed the interviewer as the research tool. As interviews remain a heavily utilized research method, challenges continue to be raised. Technical challenges, such as question wording, are seen as easily solved and controlled, whereas epistemological complexities (like the sociohistorical context of the interview) that influence what we consider knowledge prove much more difficult to decipher (Gubrium et al., 2012).

Gubrium and Holstein (2002) offer yet another historical view. Their focus on the roles of the participant in the interview process illustrates a shift from "passive vessels of answers" to active co-constructors of data (p. 13, emphasis in original).

Typology. A common classification strategy for interviews is structural. Merriam and Tisdell (2016) offer three categories: highly structured or standardized, semistructured, and unstructured or informal. The sub-study that serves as the context for this dissertation resides in the middle: it follows a strongly semistandardized framework, with some room for adjustment of the questions by the interviewers (Berg, 2008).

Interpretive interviews. Numerous terms exist in the literature to describe the nature of these interviews: generic qualitative interview, open-ended interview, intensive interview, in-depth interview, semi-structured interview, active interview, dramaturgical interview, reflective interview, and ethnographic interview (Atkinson & Silverman, 1997; Berg, 2008; Holstein & Gubrium, 1995; Johnson & Rowlands, 2012; King & Horrocks, 2010; Merriam, 2009; Roulston, 2010; Seidman, 2006; Spradley, 1979; Warren, 2002). Common features include flexibility; an open and exploratory form; depth and disclosure of information; an experience-based, contextual, emotive, inductive, relational, and interpersonal character; and familiarity and closeness (Johnson & Rowlands, 2012; King & Horrocks, 2010). The inherent advantage of these interviews lies in the intimacy and depth they offer, "the opportunity for an authentic gaze into the soul of another" (Atkinson & Silverman, 1997, p. 305). Seidman (2006) discussed the purpose of in-depth interviews as understanding the lived experience of others and gaining context for behavior as a pathway to grasping meaning. The relational aspect is also emphasized, as rapport is deemed vital to the co-construction of knowledge. Rapport is a delicate feature of a research relationship, in which just the right amount must be cultivated for optimum effectiveness: too much rapport can lead to blurred lines, and too little prevents disclosure. Mutual trust and comfort allow for the free creation and exchange of information (DiCicco-Bloom & Crabtree, 2006). Borer and Fontana (2012) extend thinking on the interpersonal context of interviews in their review of postmodern-informed interviewing. Through such a lens, there is less distinction between interviewer and interviewee roles, along with collaborative relationships among researchers and participants, awareness of power and privilege, attention to voice and polyphony, variety and creativity in presentation formats (including poetry and drama), an embracing of technology for research purposes, and a returned focus on the senses.

Health sciences. Interpretive interviews have a rich history and a prominent place in the helping and health sciences (Low, 2007; Miczo, 2003; Padgett, 2012). As a research tool within the health care field, the in-depth interview is extensively used (DiCicco-Bloom & Crabtree, 2006). The provider/patient and interviewer/interviewee relationship roles are explored with awareness of challenges, along with some guidelines and strategies for future researcher-practitioners (Mishler, 1984; Nelson, Onwuegbuzie, Wines, & Frels, 2013; Munhall, 2012a; Zoppi & Epstein, 2002).

Mixed methods. Interviews of all types are utilized in mixed methods studies (Brannan & Halcomb, 2009; Morse, 2012). Interviews are the most common form of qualitative data in mixed methods studies in the health sciences (O'Cathain, Murphy, & Nicholl, 2007). In-depth interviews for the creation of text-based data are a central component of mixed methods inquiry (Johnson & Turner, 2003; Padgett, 2012).

Longitudinal. As in more traditional forms of quantitative research, qualitative longitudinal research also examines change over time. In fact, qualitative research often concerns itself specifically with the study of a process rather than focusing on more static information reflecting a single point in time. If it is within the resources and scope of the project, pursuing such data is advantageous, as this accumulation of information provides strength over "one-shot" interviews (Warren, 2002, p. 98). An additional benefit of multiple interviews over time is that a trusting relationship and sense of rapport can develop, enabling a space where interviewer and interviewee can collaborate (Padgett, 2012). In such a milieu, participants can freely share descriptive and contextual information as well as feel safe in revealing their emotions and personal stories (Grinyer & Thomas, 2012).

Question formation. Many authors proffer their ideas about the features of the ideal question. The perfect question should be open-ended, clear, accessible (in terms of language), neutral (not leading), humble, research-focused, meaningful, relevant (to participant experience), sensitive, and ethical, and it should advance the interview and encourage focus (Berg, 2008; Hatch, 2002; Mason, 1996). Rubin and Rubin (2005) provide thorough and cogent source material for question composition, covering main questions, follow-up questions, and probes. In writing these questions, considering the research questions and reflecting on and aligning the epistemological stance with the interview questions should be part of the process (Trainor, 2013).

Other researchers discuss quality by sharing mistakes and pitfalls. Interviewers should do their best to avoid certain types of questions, such as leading questions, complex or multipart questions, and yes-or-no questions (King & Horrocks, 2010; Merriam, 2009; Merriam & Tisdell, 2016). Interviewers must also take notice of their own responses during an interview that could be detrimental: obvious responses, lack of listening, and obtrusive non-verbal messages (King & Horrocks, 2010).

Methods. Researchers must consider many methodological choices when planning and implementing interpretive interviews. Procedures surrounding the interview site and participants, as well as recording and note taking, are explored and documented here to provide an overview of these decision points, which will be essential context for understanding the results that follow.

Site and participants. Selecting a site that is accessible and comfortable for all parties is not an easy decision (Kelly, 2013). An environment that is safe, quiet, and private is ideal. Office settings, coffee shops, parks, and participant homes are all common locations, and each has its own set of benefits and drawbacks. For instance, a participant's home may be highly comfortable but also highly distracting, since others may be present, the television may be on, and supper may be on the stove. The participant may not fully disclose in front of such an audience and may face increased time pressures from being in the home; the interviewer may not feel safe traveling to an unknown location. Conversely, scheduling the interview in a location more convenient to the interviewer, like a campus office, may not feel safe or neutral to the participant. A coffee shop is neutral to both the interviewer and interviewee, but may be noisy and make transcription difficult, and sensitive topics are difficult to cover in a public place. Interviewers need to consider and prioritize which of these characteristics are most important, since some may be at odds with one another (King & Horrocks, 2010). Typically, the participant's comfort is the foremost concern, and researchers therefore attempt to accommodate participants' requests as closely as possible unless there are real, substantive reasons to negotiate the setting.

Interpretive interviews generally draw on some form of purposeful sampling technique (see Creswell, 2013 for an extensive list of options in qualitative research). The total sample size is not commonly tied to a numeric metric but rather to saturation of data or, in some forms of qualitative research such as grounded theory, theoretical saturation (Glaser, 1978; Glaser & Strauss, 1967; Guest, Bunce, & Johnson, 2006; Kelly, 2013; Strauss & Corbin, 1990, 1998; Trainor, 2013). Participants are recruited until the research questions have been addressed fully. Another decision pertains to format: whether to conduct individual interviews or group interviews. In terms of logistics, one-on-one interviews are easiest to schedule and also provide the most privacy and perceived confidentiality; participants may feel they are able to share more in this environment (Beitin, 2012).

People are unique, and so it follows that what they share would also be distinctive. Depending on the research question, some interview participants are much better suited to discussing the phenomenon of interest than others (Kvale, 1995). Perhaps the two most salient points in this process are knowledge or experience of the phenomenon under study and the ability and willingness to communicate effectively with the interviewer (Trainor, 2013; Warren, 2002). Merriam (2009) describes a skillful participant as one who "can express thoughts, feelings, opinions—that is, offer a perspective—on the topic being studied" (p. 107). Interviewers should look for other important interviewee characteristics: cooperative, motivated, eloquent, knowledgeable, truthful, consistent, concise, precise, coherent, without contradictions, focused, intelligent, reflexive, and a good storyteller (Johnson & Rowlands, 2012; Kvale & Brinkmann, 2009). It is worth taking a chance on a participant; there are pleasant surprises to be found.

Recording and note taking. Most contemporary researchers advocate for the use of audio recording to help obtain a full and accurate record of the interview experience. As with all technology, testing the equipment beforehand and maintaining it over time is essential to obtaining the best sound quality possible; good sound quality has a direct bearing on the quality of transcription (King & Horrocks, 2010). Some participants may feel uneasy while being recorded and may talk freely only once the recorder is turned off. They may feel free to cover topics of their choosing or may want to share something off the record. Either way, this unrecorded data has value and relevance; it can offer profound insights and add depth to the data shared previously during the recording (Warren, 2002). An interviewer should leave the recording device on for the entire interview and even some time after, but it is also good practice to allow some unrecorded time for unplanned disclosures. The interviewer may also find it useful to verbally record notes and memos immediately following the interview session, while impressions are fresh, to bolster analytical reflection at a later date.

Concurrent note taking along with audio recording is good interview practice (Kelly, 2013). Notes taken during interviews provide additional data that are not captured by the audio recording and thus by the transcription. For example, notes provide details that are not present in the words, such as non-verbal messages, and afford the interviewer the opportunity to better pinpoint and reflect upon key passages of the responses (King & Horrocks, 2010). If the notes are taken directly on the interview protocol during the interview, they can be aligned with the interview transcript and reviewed during analysis. The conclusion of the interview is another prime time to record field notes regarding the interview before crucial information is lost (Johnson & Rowlands, 2012).

Implementation. Interpretive interviews are conducted in a conversational style but are not conversations; they are a data-generating platform (Kvale, 2009). There is a general rhythm or pattern to the conduct of interviews that becomes familiar with practice. Interviewers begin by introducing themselves and the research project, discuss the researcher role and expectations of the participant, and encourage participants by reminding them of the vital information they have to share (Rubin & Rubin, 2011). Some casual conversation before the interview is a friendly way to get the participant talking and begin to build trust. Informed consent must be obtained (Hatch, 2002; Warren, 2002). Unthreatening, easy questions that gather basic descriptive information pertaining to the general topic come at the start of the interview (King & Horrocks, 2010; Warren, 2002). Questions pertinent to the topic and of greater specificity, as well as sensitive questions, follow (Kelly, 2013; Rubin & Rubin, 2011; Warren, 2002). Interviewers listen and strive not to judge (Rubin & Rubin, 2011). The interview concludes with a summarization of key ideas, and the interviewer asks if there is anything else the participant would like to add or if there are any lingering questions (Hatch, 2002; King & Horrocks, 2010; Warren, 2002). Rubin and Rubin (2011) remind us to return the conversation to a relaxed place if the interview was emotional or stressful, and to always thank participants for giving of their time and talents.

Mason (1996) provides a detailed example of researcher thinking during an interview, displaying some of the many decisions that must be made both before and during an interview:

"At any one time you may be: listening to what the interviewee(s) is or are currently saying and trying to interpret what they mean; trying to work out whether what they are saying has any bearing on 'what you really want to know'; trying to think in new and creative ways about 'what you really want to know'; trying to pick up on any changes in your interviewees' demeanor and interpret these, for example you may notice they are becoming reticent for reasons which you do not understand, or if there is more than one interviewee there may be some tension developing between them; reflecting on something they said 20 minutes ago; formulating an appropriate response to what they are currently saying; formulating the next question which might involve shifting the interview onto new terrain; keeping an eye on your watch and making decisions about depth and breadth given your time limits. At the same time you will be observing what is going on around the interview; you may be making notes or, if you are audio or video tape recording the interview, keeping half an eye on your equipment to ensure that it is working; and you may be dealing with 'distractions' like a wasp which you think is about to sting you, a pet dog which is scratching itself loudly directly in front of your tape recorder microphone, a telephone which keeps ringing, a child crying, and so on" (pp. 45-46).

Given all these steps and tasks, sufficient time is needed to adequately prepare for an effective interview (King & Horrocks, 2010). The substantial planning and preparation involved in interpretive interviews can exceed that of structured interviews; such complexity lies in the flexible and responsive flow of open questioning and the accompanying decisions regarding "substance, style, scope, and sequence" (Mason, 1996, p. 43). Pilot testing is one avenue for discovering errors or weaknesses and allowing for question refinement to increase validity (Dikko, 2016). Conversely, testing a question with participants like those who are to be studied can provide positive feedback: it can show that the items are clear, well understood, and good prompts for an articulate response (Merriam, 2009; Turner, 2010). Interviewers must be equipped to handle all manner of participant responses and behaviors (Roulston et al., 2003).

Interviewer skills. It is to be expected, given the intricacy of interpretive interviews, that interviewers would need to build a broad skill set. Kvale and Brinkmann (2009) view the interviewer as a craftsman and list characteristics of this persona: knowledge, structure, clarity, gentleness, sensitivity, openness, steering, criticalness, remembering, and interpreting. Most appreciate the difficulty of interpretive interviews and see interviewing as a skill that can be honed with practice (Johnson & Rowlands, 2012; King & Horrocks, 2010; Seidman, 2006). Central to the interviewer identity is active listening. It is imperative that the interviewer be a keen listener and balance the dialogue appropriately between talking and listening (Kelly, 2013; Mason, 1996). Talmage (2012) encourages listening for things said and not said, as well as listening through our many filters. Hatch (2002) reminds us that, like all skills, listening is one we must practice in order to perfect: "Listening like a researcher is hard work, and it takes practice to do it well" (p. 109). There are many techniques to master: facilitation, rapport building, communication, question asking (including follow-ups and probes), remembering, not interrupting, being present, allowing space for silence, understanding non-verbal cues, using intuition, being empathetic, observing, note taking, and generally being prepared for the unexpected (Kelly, 2013; Kvale & Brinkmann, 2009; Mason, 1996; Patton, 2015; Seidman, 2006).

Research Paradigms

Before issues of validity can be approached to fully contextualize fidelity for interviews as a research tool, matters of philosophy of science must first be addressed and clarified. As succinctly stated by Greene and Hall (2010), "Philosophy of science matters to social inquiry" (p. 121). Narrowing the focus, paradigms are a central construct within the research literature, particularly within mixed methods research, where distinct and diverse paradigms often meet (Biesta, 2010; Greene, 2007). Mixed methods research operates across, within, between, and at the intersection of research paradigms. Researchers must have some awareness and level of competency to blend distinct types of research. Research philosophy is utilized in this analysis to expound the interview fidelity ideals. To understand why philosophy of science is critical, one must look back on some historical moments that serve to define the history of the research enterprise.

History of paradigms. In his seminal work The Structure of Scientific Revolutions, Thomas Kuhn (1962) introduced the rather amorphous yet important construct of paradigms to the social sciences from the natural sciences. Kuhn based this premise on his historical examination showing that the knowledge base of normal science advanced through revelatory bursts of novel thinking rather than through incremental steps built on prior learning. "Once a first paradigm through which to view nature has been found, there is no such thing as research in the absence of any paradigm" (Kuhn, 1970, p. 79).

When the current philosophy could no longer adequately answer the research questions, a time of crisis would produce a new paradigm that would subsume the old ways of thinking. In order for the new paradigm to be of worth, it had to advance the field by answering previously unsolvable research questions while still maintaining the same capacity as its forebears. These times of growth and scientific discovery were referred to by Kuhn as "scientific revolutions" (1970, p. 92), which have come to be known more colloquially as paradigm shifts. Often, the charge for change was led by those younger and less entrenched in the current ways of thinking.

With his summative work on scientific research, Kuhn laid the groundwork for the way research is conceptualized. Perhaps inadvertently, he opened a Pandora's box of paradigms that could not be closed. In the second edition, Kuhn (1970) clarified the definitions of paradigm he employed, extending them beyond beliefs and values to encompass the members of a scientific community, and shared examples. Social and behavioral scientists have used and misused the term since his first work in the early 1960s. Kuhn started the discourse surrounding paradigms and paradigm incompatibility issues that would later be referred to as the paradigm wars.

Paradigms defined. Methodologists with interpretive leanings utilized Kuhn's paradigm principle to theoretically justify their approach to research. The justification for this new form of inquiry was constructed using highly theoretical language. Broadly, paradigms were "whole systems of thinking" (Neuman, 2011, p. 94). Narrowing down, paradigms were seen as

"Basic belief systems based on ontological, epistemological, and methodological assumptions…It represents a worldview that defines, for its holder, the nature of the 'world,' the individual's place in it, and the range of possible relationships to that world and its parts" (Guba & Lincoln, 1994, p. 107).

More simply, paradigms are "a basic set of beliefs that guides action" (Guba, 1990, p. 17). Researchers were encouraged to plan and align their research so that it not only answered the research questions but also cohered across the levels of paradigmatic assumptions (Denzin & Lincoln, 2011b; Lincoln, Lynham, & Guba, 2011). Starting with the highest level of abstraction, the pinnacle, leads us to ontology, the entire "nature of reality" (Guba & Lincoln, 1994, p. 108), the study of "being" (Crotty, 1998) or "existence" (Hesse-Biber, 2010, p. 456). To center oneself on such an intangible construct, one might search for the answer to the question "What kinds of things are there in the world?" (Benton & Craib, 2001, p. 4).

Next, researchers align epistemology, which is considered a general theory of knowledge. It explains how we know what we know. It shapes what we aim to know through the inquiry process and influences or limits what we believe is even possible to know (Benton & Craib, 2001; Graue & Karabon, 2013; King & Horrocks, 2010; Mason, 1996). Carter and Little (2007) emphasized the theoretical portion in their definition and considered epistemology simply as "justifying knowledge" (p. 1317). Hesse-Biber (2010) focused on the creation of information with her conception of epistemology as knowledge building.

Subsequently, methodology is added to the inquiry process. Methodology is best described as a process where researchers offer explanations for their particular selections for design, data collection, data analysis, and interpretation (Graue & Karabon, 2013; King & Horrocks, 2010). Methodology provides a comprehensive justification of methods (Carter & Little, 2007) or a way of connecting methods to outcomes (Crotty, 1998).

Ultimately, with methods, the final level of the research process is attained and explicated. Methods refer to the individual research tools and techniques that researchers employ, as well as the procedural steps through the entirety of the research process (Graue & Karabon, 2013; King & Horrocks, 2010). From another perspective, methods give researchers the means of "interfering with…the world" (Mol, 2005, p. 304). For instance, in the current dissertation, interviews are a research method. With research methods, the process becomes not merely theoretical but active, involving both the interviewer and participants (Carter & Little, 2007).

Paradigm wars. The longstanding research paradigm was objectivist in its form, better known as positivism or, later, as the more progressive postpositivism (Crotty, 1998; Teddlie & Tashakkori, 2009). Interpretivists, or constructivists, railed against the existing research tradition, drawing on Kuhn's (1970) position that belief in two paradigms was "incommensurable" (p. 198) due to their philosophical underpinnings. Paradigms were articulated such that the canons were clear and thoroughly demarcated (Bryman, 2006). Incommensurability served to prevent communication among different inquiry paradigms, and so there was an era in which diametrically opposed camps formed and flourished (Schwandt, 2000). The incompatibility thesis became part of the academic climate, impeding research with integrated quantitative and qualitative elements and deeming it unacceptable (Howe, 1988; Teddlie & Tashakkori, 2009). These differences would come to be known as the paradigm wars (Denzin & Lincoln, 2011b). Methodologists differ marginally, but the 1980s are generally considered the height of the paradigm wars (Gage, 1989; Lincoln, 2009). Guba's (1990) seminal work, bringing so many scholars with divergent perspectives into dialogue, signaled the ending of the wars (Denzin & Lincoln, 2011).

Paradigm peace. The paradigm wars are considered settled, peace prevails, and inquiry integration is accepted as valid research practice (Bryman, 2006; Denzin & Lincoln, 2011b; Guba, 1990; Muncey, 2009; Twinn, 2003). Some have begun to question whether the paradigmatic conflict was ever problematic in the wider research literature beyond the social sciences (Maxwell, 2015). In later works, Kuhn (1970) progressed in his thinking to advocate active communication processes and persuasion to overcome paradigm conflicts. Howe (1988) offered the compatibility thesis long ago to encourage combining disparate methods regardless of any perceived ties to epistemology. Thus began the uncoupling of particular methods from particular philosophies (Johnson & Onwuegbuzie, 2004; Reichardt & Cook, 1979; Tashakkori & Teddlie, 2003). As argued by Teddlie and Tashakkori (2012), "Paradigms can be associated with any given method. If researchers desire to use QUAN or QUAL methods exclusively, then that decision should be based on their research questions, not some link between epistemology and methods" (pp. 779-780). Methods were seen as pivotal to escaping the paradigmatic entrenchment: "Paradigms bring themselves into some reasonable state of equilibrium with methods" (Howe, 1988, p. 13). There was a discernible difference in views of compatibility between those who worked at the philosophical level with epistemological issues and those who operated at a technical level with methods: the former believed in incompatibility; the latter considered integration appropriate (Bryman, 2006). But as paradigms are separated from methods, it must be noted that only the notion that specific paradigms belong to specific methods (e.g., postpositivism and the survey, or constructivism and the interview) is being severed.

New terms have come into use based on the integration of methods. Methodological eclecticism is the selection and assimilation of the best-fitting methods for the study from among the many possible. Being a connoisseur of methods means the researcher is skilled at choosing research tools wisely to address the phenomenon of interest (Denzin & Lincoln, 2011; Teddlie & Tashakkori, 2012). These terms can exist only under a compatibility viewpoint and a denunciation of the incompatibility thesis.

Amid the broad research literature, it should be noted that small voices of caution and discontent remain in the paradigm debate (Lincoln, Lynham, & Guba, 2011). For some, the intertwining of method and paradigm is still seen as essential and is retained in an albeit slightly more moderate "soft incompatibility thesis" (Yanchar & Williams, 2006). For these researchers, divides remain at the philosophical level, with possible room for integration at the practical level of methods. Carter and Little (2007) offer a quality framework based on the alignment of epistemological position, methodology, and methods. Therefore, researchers can find some common ground for considering and incorporating paradigms in their process of inquiry.

Paradigm relevance. Paradigms have their place in the inquiry process. A reasoned and logical account of how the inquiry process was navigated must be considered, along with how the phenomena were investigated under the ontological discretion of the research paradigm(s) at work. This information should be fully explicated for the researcher(s), funders, peer reviewers, participants, and others invested in the research endeavor, and how these paradigms interacted and influenced the inquiry process should also be part of this account. Sadly, this is not happening in the current body of literature: Alise and Teddlie (2010) found only one article in their survey of 600 from top social science journals that explicitly referred to paradigms. "Paradigm issues are crucial; no inquirer, we maintain, ought to go about the business of inquiry without being clear about just what paradigm informs and guides his or her approach" (Guba & Lincoln, 1994, p. 116). If not properly examined, the power of assumptions steers our research and can hide much without our realizing it.

"At every point in our research—in our observing, our interpreting, our reporting, and everything else we do as researchers—we inject a host of assumptions. These are assumptions about human knowledge and assumptions about realities encountered in our human world. Such assumptions shape for us the meaning of research questions, the purposiveness of research methodologies, and the interpretability of research findings. Without unpacking these assumptions and clarifying them, no one (including ourselves!) can really divine what our research has been or what it is now saying" (Crotty, 1998, p. 17).

Many have recognized the dearth of ontological and epistemological reflection and have called for explicit examination of these assumptions and beliefs (Bryman, 2006; Greene & Hall, 2010; Mesel, 2013). The importance of paradigms should not be understated (Creswell & Plano Clark, 2013; Johnson & Onwuegbuzie, 2004; Lincoln, Lynham, & Guba, 2011; Morgan, 2007; Teddlie & Tashakkori, 2011). Their impact and influence are far-reaching, and researchers must recognize this (Munhall, 2012b). Researchers as well as participants are affected by the research paradigm; all phases of research are touched by its fingerprint. Readers and policy decisions are influenced as well.

"Paradigms and metaphysics do matter. They matter because they tell us something important about researcher standpoint. They tell us something about the researcher's proposed relationship to the Other(s). They tell us something about what the researcher thinks counts as knowledge, and who can deliver the most valuable slice of this knowledge. They tell us how the researcher intends to take account of multiple conflicting and contradictory values she will encounter. These questions need to be addressed…What does it mean in terms of researcher assumptions? What does that mean for how we read the research findings? For how we use knowledge to formulate policy? For how we serve the means and ends of social justice? There is much at stake here" (Lincoln, 2009, p. 7, emphasis in original).

Research philosophy is impactful whether intended or not.

Inquiry paradigms. With the importance of paradigms well established, and to aid in conceptualizing interview fidelity constructs across these paradigms, several prominent research paradigms will be discussed: positivism, postpositivism, constructionism/constructivism/interpretivism, postmodernism/post-structuralism, transformative, and pragmatism. This list is not intended to exhaust all possible research paradigms. Descriptions will focus on the major tenets and historical record of each paradigm to provide a greater understanding of its application. These accounts serve to educate and delineate the types of available research foundations. Paradigms have expanded and flourished in the research literature (Denzin & Lincoln, 2011a). Researchers, particularly mixed methods scholars, must have an awareness of these paradigmatic principles in order to make informed choices in their studies (Teddlie & Tashakkori, 2009).

It is also worth explaining at this point that there is not a quantitative or qualitative paradigm. The quantitative and qualitative labels mean many things: specific methods (Crotty, 1998), labels for data (Greene & Hall, 2010), types of research (Morgan, 2007), and epistemological and ontological assumptions (Biesta, 2010). Often, the terms quantitative and qualitative serve as an incorrect proxy for particular paradigms: "quantitative paradigm, qualitative paradigm, and mixed methods paradigm" (Teddlie & Tashakkori, 2012, p. 780).

Positivism. Positivism's nascent beliefs were cultivated in Enlightenment thought, where science was seen to elevate humanity (Bernard, 2006). Positivism is concisely summarized as posited statements based on observation and experience and tested by the scientific method (Crotty, 1998). Neuman (2011) offers another definition highlighting many important tenets: "an organized method for combining deductive logic with precise empirical observations of individual behavior in order to discover and confirm a set of probabilistic causal laws that can be used to predict general patterns of human activity" (p. 95). The philosophy of logical positivism advanced the position of science by asserting it was the only way to truth or genuine meaning (Borg & Gall, 1989). Claims must be substantiated with verifiable evidence and systematic observation (Macionis, 2015). Knowledge is predicated on experience (Bernard, 2006). These beliefs coalesced among the Vienna Circle researchers in Europe in the early part of the twentieth century and were disseminated around the Second World War as this group scattered around the globe, spreading its ideas.

Later, around the turn of the twenty-first century, positivism made some advances while retaining elements of its roots. Positivism was still very much empirically based (Crotty, 1998). It remained tied to its history with the natural sciences, stemming from objectivism, even as it bridged over into the social sciences and aimed at prediction in those fields as well (Delaney, 2014). At its core was still an empirical belief in knowledge: a strong belief in experiential, observable, and testable knowledge, and in patterns of experience and prediction (Benton & Craib, 2001). For positivists, scientific knowledge was always, without question, accurate, objective, and value-free (Crotty, 1998; Neuman, 2011).

Ontologically, reality and truth absolutely exist, in the form of realist and critical realist beliefs (Denzin & Lincoln, 2011b; Guba, 1990; Guba & Lincoln, 1994; King & Horrocks, 2010). Epistemologically, positivists prefer precise and careful measurements, which lend themselves to quantitative, objectivist, numerical data from experiments, quasi-experiments, and surveys (Denzin & Lincoln, 2011b; Macionis, 2015). These aggregated data are subjected to descriptive and inferential statistical analysis, as well as possible replication studies and generalization to other populations (King & Horrocks, 2010).

Postpositivism. Postpositivism takes a rather different ontological position in which reality is viewed as less certain and only a matter of approximation. Within this body of literature exists an uncertainty principle, casting doubt on the absolute objectivity of knowledge (Crotty, 1998). Relative statements supplant objective declarations. Falsification replaces verification. Truth has a provisional rather than an absolute tone. The aim of inquiry is to get as close as possible to truth by answering research questions using statistical means (Lincoln, Lynham, & Guba, 2011). Hypothesis testing and statistical methods alone are not enough; one must gather substantial and solid evidence (Wasserstein & Lazar, 2016). Therefore, while positivism and postpositivism share the same overall ontology of the realist perspective, epistemologically there is an increased tentativeness, a modulated objectivity, in truly discovering that reality (Guba, 1990). Denzin and Lincoln (2011b) reveal that “only partially objective accounts of the real world can be produced, for all methods are flawed” (p. 15).

Constructionism/constructivism/interpretivism. Researchers who viewed human agency and interaction as unique and different from anything in the natural sciences brought forward another approach (Benton & Craib, 2001). For these scholars, truth consists of constructed meaning. Crotty (1998) defines constructionism: “All knowledge, and therefore all meaningful reality as such, is contingent upon human practices, being constructed in and out of interaction between human beings and their world, and developed and transmitted within an essentially social context” (p. 42). The mind is actively involved in the research process under this paradigm. Crotty (1998) asks, “Can there be meaning without a mind?” (p. 43). As active agents, individuals create their own interpretations of experiences. These understandings are not seen as static, but malleable with time (Schwandt, 2000). Reality is manifold; truth may be reinterpreted (Crotty, 1998; Guba & Lincoln, 1994).

Social constructionism is a variant of constructionism highlighting the social and cultural context. This social emphasis is a common distinction between constructionism and constructivism (terms that are often used interchangeably): the former emphasizes the collective and social nature of meaning and meaning making, while the latter focuses on individual experiences and the internal dynamics of the mind (Crotty, 1998). In summation, the constructionist ontological nature of reality is relativist, or multiple rather than singular; epistemologically, subjectivity prevails in the co-construction of data and meaning in naturalistic environments (Denzin & Lincoln, 2011b; Guba & Lincoln, 2005).

Interpretivism is similar and quite nearly parallel to constructivism. A deep focus on understanding and subjective meaning are the foundational qualities of interpretive thought (Macionis, 2015). It is a reference to an epistemology encompassing idiosyncratic understandings of meaningful social action (Neuman, 2011). Crotty (1998) tells us, “The interpretivist approach…looks for culturally derived and historically situated interpretations of the social life-world” (p. 67). There are also links to other belief systems under this larger umbrella term, including symbolic interactionism and phenomenology (Bernard, 2006).

Postmodernism/post-structuralism. Postmodern thinking is a stark departure from the other philosophies presented. The most cogent thoughts about this perspective have to do with its lack of clarity: “Whatever postmodern and post-structural mean these days, they are pervasive, elusive and marked by a proliferation of conflicting definitions that refuse to settle into meaning. Indeed, refusing definition is part of the theoretical scene” (Lather, 2001, pp. 478-479). Postmodernism is a movement away from modernism, which is rational, logical, forward thinking, and scientifically based (Neuman, 2011). Postmodernism “commits itself to ambiguity, relativity, fragmentation, particularity, and discontinuity” (Crotty, 1998, p. 185). Meaning is viewed as “tentative,” “provisional,” “temporary,” and “contingent” (Crotty, 1998, p. 194) as opposed to being “generalizable” or able to accrue over time (Neuman, 2011, p. 118). Postmodernist thinking is the ultimate leveler; it does not privilege anything, be it perspective, method, or representation. Beyond the typical written textual description, postmodernist tales may take many forms such as plays or poems (Neuman, 2011). There is a unique acknowledgement of the distance between life and even the most creative representations of lived experiences: “We know that there is extensive slippage between life as lived and experienced and our ability to cast that life into words that exhibit perfect one-to-one correspondence with that experience. Words, and therefore any and all representations, fail us” (Lincoln, Lynham, & Guba, 2011, p. 125).

While sometimes seen as synonymous with postmodernism, post-structuralism is considered a sub-concept. Post-structuralism is a variant of postmodernism associated with France and with the work of Foucault. The emphasis is on language and discourse. Language is viewed as “an unstable system of referents, making it impossible to ever completely capture the meaning of an action, text, or intention” (Denzin & Lincoln, 2011b, p. 16). Derrida, known for his thoughts on deconstruction and ideas under erasure, was another notable post-structuralist (Crotty, 1998). With Derrida, the focus was on absence, not presence, as meaning was deemed to exist just beyond reach (Benton & Craib, 2001).

Transformative. Acknowledging the values and knowledge of marginalized communities is a primary assumption of the transformative paradigm, which differs considerably from the others (Mertens, 2009; 2013). Recognizing power and focusing on social justice for the oppressed is of utmost importance. The transformative paradigm demands assessing and understanding privilege, whether explicit or hidden. Reality is viewed as a social construction, and true meaning is gleaned through cultural contexts such as gender, ethnicity, and disability. Ethics are paramount, especially given the special populations involved (Mertens, Bledsoe, Sullivan, & Wilson, 2010).

Some (Lincoln, Lynham, & Guba, 2011) may prefer to use the broader and more well-known critical paradigm, or even specific forms such as feminist theory (Hesse-Biber, 2013), in place of the transformative paradigm. The transformative approach extends the work of the critical approach beyond commentary, demanding forward progress and change.

Pragmatism. The pragmatic approach is seen as a practical system in which researchers begin with research questions and utilize the combination of methods that best fits (Bryman, 2006; Morgan, 2007; 2014). Biesta (2010) characterizes pragmatism in terms of matching “research means to research ends” (p. 96). Tenets of pragmatism include multiple and eclectic blends of methods, which often result in contradictory paradigm combinations. However, since the pragmatic philosophy is one of pluralism and compatibility, this integration is accepted. Thus, there is a tolerance of a wide range of ontological and epistemological stances, such as seeing knowledge as both objectively discovered and subjectively socially constructed. Meaning and truth are based on action and experience and are only ever tentative. There is a strong preference for action and outcomes (Biesta, 2010; Johnson & Onwuegbuzie, 2004).

Unsurprisingly, this inclusive focus and flexibility have made pragmatism an extremely popular paradigm choice for mixed methods researchers (Greene & Hall, 2010). Some have gone as far as labeling pragmatism a meta-paradigm (Sommer Harrits, 2011). Johnson and Onwuegbuzie (2004) link mixed methods and pragmatism so closely that they refer to pragmatism as the “philosophical partner of mixed methods” (p. 16). Moreover, pragmatism guides integration (Morgan, 2007). Despite its popularity, researchers have been accused of misusing pragmatism: some have purported to use pragmatism when their research may actually be more aligned with positivist foundations, while others have utilized pragmatism as a placeholder, a way to simply withdraw from the paradigm discussion altogether (Lincoln, 2010).

Dialectical stance. Dialectical pluralism is not a paradigm in the same way as the others described previously, but is rather a way of utilizing paradigmatic stances (Greene, 2007). Dialectical thinking compels researchers to understand the philosophy behind research and to use paradigms to inform that research. More than paradigms are engaged in the inquiry process with a dialectical approach—methodologies, methods, and mental models are included in the research dialogue (Greene & Hall, 2010, p. 124). Mental models are defined:

“A mental model is the set of assumptions, understandings, predispositions, and values and beliefs with which all social inquirers approach their work. Mental models influence how we craft our work in terms of what we choose to study and how we frame, design, and implement a given inquiry. Mental models also influence how we observe and listen, what we see and hear, what we interpret as salient and important, and indeed what we learn from our empirical work” (Greene, 2007, p. 12).

Researchers employing a dialectical stance articulate a comprehensive mental model including research philosophy, guiding theories, influences from the field, educational context, methodological traditions, and personal and professional experiences (Greene, 2007; Greene & Hall, 2010). Mental models allow open space for the exchange of ideas, where paradigms and pure philosophy, with their inherent differences, may serve to shut down these fruitful interchanges. Instead, a dialectical stance can uncover new meaning through accepting multiple paradigms, actively working with and across contexts of difference to arrive at an improved understanding (Greene & Hall, 2010).

Mental model. As an exemplar, the mental model for this dissertation research is discussed. This dissertation research takes place within the context of a more expansive mixed methods RCT research study, with its own parameters and team of nursing researchers. Each member has her own personal beliefs and values regarding research and the phenomenon under investigation. As a methodologist, I have had my own personal research journey (see also Reflexivity in Chapter 1).

A guiding research philosophy is another model element. Examining the larger Hearts on Track study, no explicit research philosophy was attached to the project or presented to the funding agency. Operating on educated assumptions about paradigms and methods, which have been largely detached from one another, as well as on personal experience with the project, this researcher surmised a postpositivist position for the RCT portion of the mixed methods study and a constructionist position for the semistructured interviews, a common approach in mixed methods studies. Hypothetico-deductive reasoning, hypotheses, and causal thinking, as per the scientific method, fit nicely within the postpositivist paradigm for the intervention and measures. A novelty in this RCT was its focus on exercise behavior (behavior change specifically) as opposed to the more normative goal of health outcomes in exercise adherence studies with heart failure patients. Exercise adherence with this special population was a unique and first-of-its-kind behavior change trial, and this was written as a specific “challenge to current research paradigms” section in the original grant application to the National Institutes of Health.

Social constructionism is well suited to studying participants to learn more about the context of the phenomenon of exercise adherence through the open-ended interviews. Under what conditions did they adhere? Exactly how does this process of adherence work (or not work) for the participants? There is a strong belief that participants have the answers and are “experts,” sharing the meaning and truth of their lived experience with the research team in a relationship of trust and confidence. It is through the interview experience that participants reflect on and make sense of their exercise experience with the interviewers.

Team stance. The Hearts on Track team of researchers had a penchant for objective knowledge; empirical data are funded and published most readily in the field. The nurses leading the larger study typically received their education and training with a strong emphasis on postpositivist learning, so it follows that this is where their experience and comfort lie. Research philosophy and paradigms were not discussed openly; however, this dissertation project arose from my personal philosophical conflict with a task I was given as part of my work on the project. It was deemed essential that I demonstrate validity with the interviews as a part of the overall fidelity of the study, in a manner similar to treatment fidelity. These stances (postpositivism/constructionism) originally seemed utterly in conflict, much like the paradigm wars. Nevertheless, with more time, thought, and dialogue, avenues for integration would eventually emerge.

Theoretical frameworks were respected and utilized fully in this mixed methods project. For this intervention, cognitive-behavioral strategies were used as a guiding framework. A variety of strategies found in the literature to be effective in changing physical activity behavior were employed in this study as well: goal setting (Albright, Pruitt, Castro, Gonzalez, Woo, & King, 2005; Eakin, Bull, Riley, Reeves, McLaughlin, & Gutierrez, 2007), self-monitoring (Carels, Darby, Cacciapaglia, & Douglass, 2004; Kumanyika, Shults, & Fassbender, 2005; Perry, Rosenfeld, Bennett, & Potempa, 2007; Yancey, McCarthy, Harrison, Wong, Siegel, & Leslie, 2006), frequent and prolonged contact (Appel, Champagne, & Harsha, 2003; Marcus, Napolitano, & King, 2007; Toobert, Strycker, Glasgow, Barrera, & Angell, 2005), feedback and reinforcement (Albright, Pruitt, Castro, Gonzalez, Woo, & King, 2005; Eakin, Bull, Riley, Reeves, McLaughlin, & Gutierrez, 2007; Marcus, Bock, Pinto, Forsyth, Roberts, & Traficante, 1998; Marcus, Napolitano, & King, 2007), self-efficacy enhancement (Albright, Pruitt, Castro, Gonzalez, Woo, & King, 2005; Marcus, Bock, Pinto, Forsyth, Roberts, & Traficante, 1998; Perry, Rosenfeld, Bennett, & Potempa, 2007), modeling (Jeffery, Wing, Thorson, & Burton, 1998), and problem solving and relapse prevention (Carels, Darby, Cacciapaglia, & Douglass, 2004; Eakin, Bull, Riley, Reeves, McLaughlin, & Gutierrez, 2007; Green, McAfee, Hindmarsh, Madsen, Caplow, & Buist, 2002; Jacobs, Ammerman, & Ennett, 2004).

Personal stance. As far as my personal stake in the mental model, I come to this research project as a research consultant and methodologist from the social sciences, a discipline that can seem both near to and far from the medical field. My educational background is in the fields of family sciences and psychology. Concepts such as self-efficacy have different meanings across these disciplinary divides, and I found that I had to read the literature to understand such concepts more clearly in the medical context.

My personal philosophical home is a transformative perspective. I have a partiality for uplifting and empowering those who are most in need, and I appreciate the spirited call to action. That is one reason I am drawn to medical research; the other is personal. In the last several years I have been afflicted with several chronic medical conditions, which only makes me more determined to help others with medical conditions.

I am fond of Greene’s (2007) dialectical stance in combining multiple paradigms and other facets of research together to come to a full understanding. I feel I have been working toward this with my doctoral program. The words of one of my favorite researchers summarize the flexibility I believe all researchers should possess, but especially mixed methods researchers.

“Be a good craftsman: Avoid any rigid set of procedures. Above all, seek to develop and use the sociological imagination. Avoid the fetishism of method and technique. Urge the rehabilitation of the unpretentious intellectual craftsman, and try to become such a craftsman yourself. Let every man be his own methodologist; let every man be his own theorist; let theory and method again become part of the practice of a craft” (Mills, 2000, p. 224).

Reflecting epistemologically, I think we can learn a great deal directly from others if we give them the chance and take the time. I am always grateful when others share this great gift of their time. Interviews, as a research tool, are powerful. I have respect for methodologies and methods across the spectrum, both those producing quantities and qualities.

Research Validity Context

Validity is a fundamental yet fraught concept surrounding research fidelity. It is not easily distilled into one pure form; rather, it exists in many forms across all phases of the research process (Winter, 2000). Generally, validity speaks to accuracy, reliability to consistency (LoBiondo-Wood & Haber, 2013). Validity, reliability, and generalizability have been considered by some as the “scientific holy trinity…to be worshipped with respect by all true believers in science” (Kvale, 1995, p. 20). As a construct, research validity resonates across the spectrum of inquiry. Whether one utilizes quantitative, qualitative, or mixed methods research processes, each carries a remarkable variety of interpretations of validity and reliability in terms of meaning, terminology, history, paradigmatic relevance, and strategies for achieving these constructs. Each is discussed in turn.

Quantitative research context. The search for validity began with objectivity and truth or, more generally speaking, “the accuracy and truthfulness of the findings” (Altheide & Johnson, 1994, p. 487). Validity and reliability harken back to the historical heritage of the scientific method (Koch & Harrington, 1998). Validity as an established term is identified most strongly with measurement research, with its stringent rules for the creation and testing of instruments with various populations (Hammersley, 2008a; Jonson & Plake, 1998; Messick, 1989; Onwuegbuzie, Daniel, & Collins, 2009; Stenbecka, 2001). Generally, validity is a necessary condition for research to proceed. Utilizing a measurement with high validity is vital. However, validity properties from previous studies and populations do not transfer, so validity must be established anew (Selltiz, Wrightsman, & Cook, 1976). Several forms of testing validity have been offered: construct, content, predictive, and concurrent (Cronbach & Meehl, 1955). Subsequently, criterion validity subsumed predictive and concurrent validity (Hubley & Zumbo, 1996). A unified view of validity as construct validity then arose, subsuming content, criterion, and consequential validity within one framework for the testing of hypotheses (Messick, 1995). Modern guidelines for testing validity have been offered (DeVon et al., 2007).

Validity is also aligned and associated with experimental research. Hammersley (2008a) underscored the salience of generalization and controlling variables within the experimental context. Threats to the internal validity of randomized experimental and quasi-experimental designs have been identified and carefully articulated: history, maturation, testing, instrument decay, regression, selection, and mortality (Borg & Gall, 1989; Campbell, 1957; Campbell & Stanley, 1963). Since then, additional threats have come to the fore: selection interactions, uncertainty about the direction of causal influence, diffusion or transmission of treatments, equalization of treatment and control groups, rivalry among respondents receiving less preferred treatments, and discouragement among respondents receiving less preferred treatments (Cook & Campbell, 2004). Others have expanded further on these concepts (Bracht & Glass, 1968; Onwuegbuzie, 2003). A recent review of social science researchers found that validity and reliability remain vital criteria for quality, while replicability and generalizability were less important; other suggested standards included clarity in procedures, alignment of research methods and research questions, comprehensibility of statistics, and significance of findings (Bryman, Becker, & Sempik, 2008).

Qualitative research context. Validity with qualitative data is a muddled landscape. Validity and reliability have gone through many changes over time, paralleling and reacting to the overall changes in the naturalistic research community (Lewis, 2009). There exists a delicate balance of maintaining rigor without sacrificing relevance (Sandelowski, 1986). All the while, pressure exists for the field to “‘get our act together’ and move qualitative research from the accusations of the ‘fantasy’ phase to the more concrete phase of accepted reality” (Morse, 1994, p. 3). Clarifying the understanding of validity in an interpretive research context allows it to transcend the concise yet cogent definition of mere truth. Schwandt (2001) defines validity as:

“one of the criteria that traditionally serve as a benchmark for inquiry. Validity is an epistemic criterion: To say that the findings of social scientific investigations are (or must be) valid is to argue that the findings are in fact (or must be) true and certain. Here, ‘true’ means that the findings accurately represent the phenomena to which they refer and ‘certain’ means that the findings are backed by evidence—or warranted—and there are not good grounds for doubting the findings, or the evidence for the findings in question is stronger than the evidence for alternative findings” (p. 267).

For a compilation of many early naturalistic definitions, see Hammersley (1987).

Validity has been known by many terms: adequacy, artfulness, attentiveness, auditability, authenticity, awareness, believability, carefulness, coherence, confirmability, congruence, conscientiousness, consistency, context, creativity, credibility, criticality, descriptive vividness, empathy, engagement, explicitness, fittingness, goodness, honesty, integrity, openness, plausibility, reflection, reflexive accounting, relevancy, respect, rigor, sensitivity, soundness, thoroughness, transparency, trustworthiness, truth value, truthfulness, validity, validation, and vividness (Altheide & Johnson, 1994; Bryman et al., 2008; Burns & Grove, 2005; Creswell, 2013; Davies & Dodd, 2002; Emden & Sandelowski, 1998; Fossey, Harvey, McDermott, & Davidson, 2002; Guba & Lincoln, 1989; Hall & Stevens, 1991; Koch & Harrington, 1998; Koro-Ljungberg, 2008; Lather, 1993; Lincoln & Guba, 1985; Mishler, 1990; Morse, 1994; Morse, Barrett, Mayan, Olson, & Spiers, 2002; Sandelowski, 1986; 1993; Slevin & Sines, 2000; Streubert, 2013; Tatano Beck, 1993; Whittemore, Chase, & Mandle, 2001).

Validity exists in a plethora of forms and models—catalytic, construct, descriptive, ecological, evaluative, excellence, face, feminist, generalizable, interpretive, ironic, judge panel, paralogical, pragmatic, rhizomatic, synthesis, theoretical, transactional, transformational, and voluptuous (Avis, 1995; Brink, 1991; Britten, Jones, Murphy, & Stacy, 1995; Cho & Trent, 2006; Kuckelman Cobb & Nelson Hagemaster, 1987; Ezzy, 2002; Forchuk & Roberts, 1993; Hall & Stevens, 1991; Koelsch, 2013; Kvale, 1995; Lather, 1986; 1993; Maxwell, 1992; Mays & Pope, 1995; Neuman, 2006; Polit & Tatano Beck, 2014; Popay, Rogers, & Williams, 1998; Secker, Wimbush, Watson, & Milburn, 1995; Slevin & Sines, 2000; Tracy, 2010; Whittemore et al., 2001).

Validity lacks consensus. It has had a contentious history because of its origins in, and qualitative research’s deliberate break from, the quantitative, objectivist research tradition (Lewis, 2009) or psychometric perspective (Janesick, 2000). Interpretive researchers began by taking their cues for quality from logical positivism (Emden & Sandelowski, 1998; LeCompte & Goetz, 1984). Validity was conceptualized broadly so the same criteria could be used (Hope & Waterman, 2003; Sparkes, 2001). Interpretive researchers then alleged differences among inquiry paradigms: the positivist/experimental versus the ascending constructionist/critical/postmodern paradigms. Understanding research to be based on different ontological worldviews and epistemological beliefs, distinct standards were deemed necessary to distinguish and embody validity for those utilizing qualitative approaches (Bailey, 1996; Leininger, 1994). How would an interpretivist substantiate knowledge claims? What would serve as evidence? Lincoln and Guba (1985) translated qualitative ideals and assumptions for the naturalist, advancing the idea of trustworthiness: validity became credibility, external validity became transferability, reliability became dependability, and objectivity became confirmability. Authenticity has since been added and explicated (Guba & Lincoln, 1989; 2011). This qualitative alternative provided another, or parallel, way of conceptualizing validity (Sandelowski, 1986; Sparkes, 2001). For some, however, these ideas were too closely aligned with the traditional research context (Maxwell, 1990). Yonge and Stewin (1988) denounced validity and reliability altogether as suitable terminology.

Before and since, many researchers have offered their own take on validity criteria or standards: aesthetic merit, analytic induction, audit trail, bracketing, codebook, collaboration with participants, comprehensive data treatment, constant comparison, crystallization, data quality, design issues, disconfirming evidence, field notes, flexibility, impactfulness, member checking, memoing, negative case analysis, peer debriefing, persistent observation, prolonged engagement, purposeful and theoretical sampling strategies, rapport, recurrent patterning, reflexivity, rich data, saturation, taping/transcription, thick descriptions, triangulation, using numbers, and voice, to mention a few ideas (Appleton, 1995; Cho & Trent, 2006; Creswell & Miller, 2000; Eisenhart & Howe, 1992; Fitzpatrick & Boulton, 1996; Fossey, Harvey, McDermott, & Davidson, 2002; Hall & Stevens, 1991; Horsburgh, 2003; Koch & Harrington, 1998; Koelsch, 2013; Lather, 1986; Leininger, 1994; Lewis, 2009; Lincoln, 1995; Lincoln & Guba, 1985; Long & Johnson, 2000; Maxwell, 2013; Mays & Pope, 1995; Morse et al., 2002; Onwuegbuzie & Leech, 2007; Padgett, 2012; Polit & Tatano Beck, 2014; Popay, Rogers, & Williams, 1998; Richardson, 2000; Roberts & Priest, 2006; Sandelowski, 1986; Sandelowski & Barroso, 2002; Seale, 1999a; 1999b; Silverman, 2000; Slevin & Sines, 2000; Thomas & Magilvy, 2011; Whittemore et al., 2001). Lincoln (1995) offered emerging criteria for quality in qualitative inquiry rooted in ethics, such as positionality, community, voice, critical subjectivity, reciprocity, sacredness, and sharing the perquisites of privilege.

Despite their ubiquity in the literature, strategies are not seen as a panacea for research quality by all (Garratt & Hodkinson, 1998; Hammersley, 2008b; Lincoln, 1995). In fact, some see strategies as problematic (Rolfe, 2006; Sandelowski, 1993). Emden and Sandelowski (1999) suggest a “criterion of uncertainty” and remind researchers to interrogate and deconstruct criteria by asking “Whose criteria? Criteria for what? and, Why criteria at all?” (p. 6). Hammersley (2008b) saw validity as existing on a spectrum from amorphous features of what could be considered quality to specific, discrete criteria. Avis (1995) recommended a move away from technical criteria. Perhaps a small step in this direction is the work of Smith (1990), who proposed an ever-changing running inventory of ideas built upon praxis. Later, Smith and Deemer (2000) expanded this conception again, advancing no particular criteria but instead challenging researchers to think about the “features…that characterize good versus bad inquiry” (p. 894). They advocated the formation of a repository of these features that could be adapted with time and experience. Cho and Trent (2006) took a process view of validity, advocating a move away from “the right criteria at the right time” to “thinking out loud about researcher concerns, safeguards, and contradictions continually” (p. 327). Many others agreed with the tenet of remaining vigilant about validity throughout the research process, staying flexible and modifying the study as necessary (Lewis, 2009; Morse et al., 2002; Sparkes, 2001; Whittemore et al., 2001).

The vacuum left by the lack of criteria is not easily filled with other notions. There are far-reaching designs for validity beyond criteria, some easier to conceptualize and implement than others. Atkinson (1995) provided an argument against one-size-fits-all criteria when he railed against the lack of consensus around a single philosophical paradigm or methodology within qualitative research. With pluralism as the status quo in interpretive research, it has become impossible to utilize one set of standards. Seale (1999b) suggested a middle ground of “intense methodological awareness” rather than “complete anarchy or strict rule following” (p. 33). Maxwell (1992; 2013) advocated a fluid and contextualized understanding of validity and the methods used to ascertain it. Taking this relative view, validity was dependent upon the available data, interpretations, contextual conditions, and overarching purpose of the study. Here the methods or strategies merely served to provide evidence for validity but could never truly guarantee it. Validity is contingent. Denzin and Lincoln (2011b) echo this sentiment: “no method can deliver on ultimate truth…no one would argue that a single method—or collection of methods—is the royal road to ultimate knowledge” (p. 178). Winter (2000) claims validity is inherent within each methodology and the “means by which this is to be achieved are different for each methodology” (p. 10). Accordingly, this leads away from general quality measures and into methodologically specific benchmarks (Lincoln, 1990). Creswell (2013) composed a distinct set of evaluative standards for five methodologies—grounded theory, narrative, ethnography, phenomenology, and case study. Peräkylä (1997) and Poland (2003) offer particular validity advice for collecting and handling data in the form of transcripts and tapes, which is of particular appeal for interview data. Creswell and Miller (2000) present still another view, returning to epistemology as a standard of rigor. Philosophy is a rigorous and solid grounding for decisions regarding quality (Koch, 1996). As Avis (1995) purports, “The strength of research evidence is only as good as the epistemology from which it derives” (p. 1208).

Another understanding of validity is interpersonal and relational, with researchers using their skills and ethics to ensure the research is of value rather than adhering to rigid standards and techniques (Davies & Dodd, 2002; Lincoln, 1995; Trainor & Graue, 2013). With this view, validity can be seen as a context-specific social construction with multiple and diverse viewpoints (Sparkes, 2001). Seidman (2006) provided an example of the researcher as instrument and the participant as collaborator in a co-constructed reality:

“There is enough in the syntax, the pauses, the groping for words, the self-effacing laughter, to make a reader believe that she is grappling seriously with the question of what student teaching means to her, and that what she is saying is true for her at the time she is saying it. Moreover, in reading the transcript, we see that the interviewer has kept quiet, not interrupted her, not tried to redirect her thinking while she was developing it; so her thoughts seem to be hers and not the interviewer’s. These are her words, and they reflect her understanding of her experience at the time of her interview” (p. 25).

Hammersley (2008b) emphasizes the power of human agency in validity and the reliance on judgment: “Learning from our own experience, and from one another” through continual reflection gives weight and legitimacy to research (p. 160). Plausibility, credibility, evidence, and the type of claim (simple description or complex theory) also factor into the decision (Hammersley, 1992; 1998). Extending the thread of validity further, Sandelowski (1993) reminds us not to focus on the rules but rather on the “artfulness, versatility, and sensitivity to meaning and context that mark qualitative works of distinction…soften our notion of rigor to include the playfulness, soulfulness, imagination, and technique we associate with more artistic endeavors” (pp. 1, 8). Within an interpretive context, validity is not assumed to be predicated on reliability. Instead, reality is open, multiple, co-constructed, and equal rather than closed, singular, and ready to be discovered (Koch, 1996; Seidman, 2006). Here, both the art and science of quality inquiry are valued (Polit & Tatano Beck, 2014; Thorne, 1997). Richardson (2000) emphasizes this point further: “Science is one lens, creative arts another. We see more deeply using two lenses” (p. 16). Seale (1999a) considers research a “craft skill,” indicating a niche realm of expanding knowledge predicated on experience while still allowing for some beauty in the process (p. 472). Others concur with this sentiment of interviewing as an artistic trade, perfected with time and experience (Bernard, 2006; Kvale & Brinkmann, 2009). Interviewing skills are seen as vital in the health sciences (Secker et al., 1995) as in the broader social science disciplines. From nursing, Cobb and Hagemaster (1987) refer to expertise in evaluating quality. Room in the postmodern realm enables some to find “literary, poetic, and artistic, forms of judgment” (Lenzo, 1995; Sparkes, 2001, p. 548).

Farther on the fringe, some cast aside the construct of validity entirely in what has been called validity corrosion (Kvale & Brinkmann, 2009; Wolcott, 1990). It is also plausible that validity has been avoided altogether due to the lack of consensus on terms, or because the philosophical understanding of some types of research does not warrant or consider validity a necessary construct (Fitzpatrick & Boulton, 1996). Still others have chosen a laser-sharp focus on validity, with an intense need to validate their research resulting in a cyclical process of perpetual validation by adhering to rigid principles known by many names (Kvale, 1995): criteriology (Schwandt, 1996), methodolatry (Janesick, 2000), or the “danger of becoming doctrinaire” (Seidman, 2006, p. 26).

In evaluating the validity of qualitative data, disagreement exists over who bears responsibility. Morse et al. (2002) emphasize the researcher’s role as paramount. Participatory action research consults the co-participants themselves (Brydon-Miller, Kral, Maguire, Noffke, & Sabhlok, 2011). Others allow the reader to determine the merit of the work (Koch & Harrington, 1998; Rolfe, 2006; Sandelowski & Barroso, 2002; Whittemore et al., 2001). Porter (2007) takes the stance that both the researcher and reader are vital to the validity process, as does Koch (1996), with the researcher leaving and the consumer following the trail of decisions. Sandelowski and Barroso (2002) highlight the written research report as a “dynamic vehicle that mediates between researcher/writer and reviewer/reader” (p. 309). Who decides that validity is an important construct to assess? Certainly researchers consider their work valuable and seek to demonstrate its worth. However, the assessment pressure is primarily external, originating from policymakers, funders, and practitioners, audiences who may not even be equipped to make such judgments (Forchuk & Roberts, 1993; Hammersley, 2008b; Seale, 1999b). Seale (1999b) aptly describes the cyclical nature of the naturalistic quality problem: the continual need to create standards to satisfy audiences persists, yet no methods or criteria stick, and so more are created.

It is unsurprising that the research literature generally lacks exemplars of interpretive validity. Boulton and Fitzpatrick (1996) uncovered a lack of published evidence of validity in their review of the health sciences. Only about 7% of studies reported qualitative validity claims. Yet even in this bleakness, Morse (1999) pleads with qualitative researchers to reconsider their abandonment of validity.

“To state that reliability and validity are not pertinent to qualitative inquiry places qualitative research in the realm of being not reliable and not valid. Science is concerned with rigor, and by definition, good rigorous research must be reliable and valid. If qualitative research is unreliable and invalid, then it must not be science. If it is not science, then why should it be funded, published, implemented, or taken seriously?” (p. 717).

Mixed methods research context. Validity within the mixed methods realm has some issues that transfer over from single- or monomethod designs and some that are unique to the integration of multiple methods within the mixed methods framework (Creswell & Plano Clark, 2011; Giddings & Grant, 2009). Early validity research in mixed methods inquiry focused on utilizing a combination of both quantitative and qualitative strategies, often with distinct criteria for each strand (Bryman, 2006; Bryman et al., 2008; O’Cathain, 2010; Johnson & Christensen, 2004; Sale & Brazil, 2004; Teddlie & Tashakkori, 2009). Some still carry forward standards intended for quantitative data and apply them to textual data (Zohrabi, 2013).

Terminology is varied in the literature but does not approach the breadth seen with qualitative data: inference quality, legitimation, quality, rigor, validity, and validation (Creswell & Plano Clark, 2011; Giddings & Grant, 2009; O’Cathain, 2010; Onwuegbuzie & Johnson, 2006; Teddlie & Tashakkori, 2009). Onwuegbuzie and Johnson (2006) put forward the novel term legitimation as a middle-ground approach between the quantitative and qualitative research communities. Utilizing a new term without the history of an ingrained concept offered the advantage of a fresh start within the mixed methods field.

Legitimation has been conceptualized as a continuous, iterative process that plays out at all phases of research (Creswell & Plano Clark, 2011). Even with this diligence, it is possible never to attain legitimation or inference closure (Onwuegbuzie & Johnson, 2006).

Several authors offer frameworks or standards for mixed methods quality or legitimation (Bryman et al., 2008; Caracelli, 1994; Collins, Onwuegbuzie, & Johnson, 2012; Dellinger & Leech, 2007; Mertens, 2013; Morse & Niehaus, 2009; O’Cathain, 2010; O’Cathain, Murphy, & Nicholl, 2007; 2008; Onwuegbuzie & Johnson, 2006; Onwuegbuzie, Johnson, & Collins, 2011; Teddlie & Tashakkori, 2009). At present, there is no consensus as to what the appropriate standards for mixed methods legitimation should be (Collins, Onwuegbuzie, & Johnson, 2012). Andrew and Halcomb (2009) ask, “How can we conduct mixed methods research and retain rigour?” (p. 218).

Applying any such strategies during the course of research is difficult to manage, particularly with complex mixed methods designs with several strands (Collins, Onwuegbuzie, & Johnson, 2012). Moreover, as with qualitative data, there are so many paradigmatic stances and ways to bring together paradigms in mixed methods that it becomes difficult to utilize a singular set of criteria to encompass all under one umbrella. Quality is related to philosophical assumptions, which trickle down to other decisions (Mertens, 2013). There is also credence in the ability of mixed methods to produce more valid findings through the abundance of data procured (Abowitz & Toole, 2010).

Validity Summary

Validity exists in many forms. Validity is dependent on ontological beliefs about reality and truth and on epistemological means to attain data. For instance, research investigations based on the postpositivist paradigm and utilizing quantitative data may rely on a priori research tools and procedures to safeguard against, search for, and eliminate concerns. Qualitative data may be associated with any number of a diverse assortment of research paradigms, from postpositivism to transformative or beyond. Validity with qualitative data is as manifold as are the epistemological solutions for such a challenge.

Mixed methods researchers utilize quantitative and qualitative data and multiple research paradigms and thus must consider validity from each of its component parts as well as assess the unique contributions of the whole.

Chapter Summary

The literature reviewed for this dissertation featured a diverse array of subject areas, all of which contributed differentially, yet vitally, to this project. To begin, RCTs and treatment fidelity offered frameworks from which to view and consider the phenomenon of interview fidelity. Next, interpretive interview methods were reviewed within the context of the type of interviews utilized in the illustrative study to provide enhanced understanding of the conduct and process of the exemplar interview experience. A detailed accounting of the researcher and team mental model followed.

Within this space, the researcher presented her subjectivities related to the project, educational training, individual and team philosophical beliefs, personal experiences with illness, and disciplinary distinctions between health and social science. Research paradigms were described starting from definitions, through periods of contention, salience, and specific inquiry pathways. Finally, the validity landscape across various forms of data was explored.

Bridging the literature. Research necessitates validation. No research is aparadigmatic. Mixed methods research compels researchers to understand and utilize multiple paradigms for the principal purpose of mixing or integration. Rather than rely on the early philosophical reasoning of Kuhn’s scientific revolutions, which considered belief in more than one research paradigm at once an impossibility, dialectical pluralism promotes the contemplation of multiple research philosophies for their contributions to the research endeavor. Thus a review of research paradigms was necessary. The exemplar mixed methods intervention trial brought together a postpositivist RCT with interpretive interviews in order to more wholly understand the phenomenon of exercise adherence among a population of heart failure patients. Therefore, the context of RCTs was explored and the interview process was detailed. Validity was examined to capture the construct of quality for interview fidelity.


CHAPTER 3: Methods

The interviews that are the focus of this dissertation took place within the qualitative strand of the mixed methods RCT study on exercise adherence among heart failure patients. Ensuring these interviews were conducted with high fidelity across interviewers and sites was the goal of this dissertation. To restate, the purpose of this dissertation was to illustrate a meaningful dialogue about research validity in the context of interviews: interview fidelity. These fidelity constructs were created from a larger project, a longitudinal RCT study of exercise adherence among heart failure patients.

Research questions guiding this study about interview fidelity include the following:

1. What are the interview fidelity ideals for the Hearts on Track study?

2. How can these interview fidelity constructs be operationalized across multiple

paradigms (e.g. postpositivism and constructionism)?

3. How did this researcher navigate between two philosophical perspectives to arrive

at a negotiated path for validity or interview fidelity?

4. What is added to the concept of interview fidelity from treatment fidelity

standards?

5. What is added to the concept of interview fidelity from the validity literature?

Research Design

This dissertation research advances the concept of interview fidelity as a means of improving interview methodology. In order to arrive at this fidelity tool, the researcher detailed the inquiry process in narrative form as part of this research project. The research process and inherent examples functioned as a model of how interpretive interviews were conducted within the constraints imposed by the philosophical bounds of an RCT. The researcher provided an in-depth discussion of several important themes to address the research questions. Decision points and areas of struggle were detailed during the process, as well as the resultant project solutions. Upon reflection, the researcher negotiated workable positions through compromise among the philosophical perspectives. The researcher used a matrix to work out drafts of the interview fidelity ideals and distill ideas into a workable narrative format. An example matrix is provided (see Appendix A).

To address the research questions posed in this study, the researcher drew upon several areas in this methodological dissertation in order to arrive at the interview fidelity ideals: qualitative analysis methods, dialectical and epistemological pluralism, the treatment fidelity framework, and the broader validity literature. The rationale for each component and how it contributed to the analysis is discussed in turn. First, qualitative methods of open coding were utilized to contemplate and analyze documents and interviews in the software MAXQDA. Open, descriptive coding was performed on the available data (see Table 3.1). From these data, 297 coded segments and 36 codes were produced (see Appendix B). From these codes, a code list was formed consisting of four themes and four sub-themes. These were reconceptualized to become the five interview fidelity ideals: research contributions, interviewer-participant association, participant accommodation, process and procedures, and data management dimensions. Each is elaborated subsequently (and in Appendix C).
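The move from many segment-level codes up to a handful of themes can be sketched as a simple rollup. The code and theme names below are hypothetical stand-ins; the actual study produced 297 coded segments and 36 codes.

```python
from collections import Counter

# Hypothetical coded segments as they might be exported from a
# qualitative analysis package: each segment carries a code and the
# theme the code was later grouped under. Names are illustrative only.
segments = [
    ("question feedback", "research contributions"),
    ("rapport building", "interviewer-participant association"),
    ("scheduling flexibility", "participant accommodation"),
    ("protocol adherence", "process and procedures"),
    ("file handling", "data management"),
    ("question feedback", "research contributions"),
]

# Roll segment-level codes up into code and theme frequencies,
# mirroring the consolidation from codes to themes.
code_counts = Counter(code for code, theme in segments)
theme_counts = Counter(theme for code, theme in segments)

print(len(code_counts), theme_counts["research contributions"])  # 5 2
```

The same counting logic scales to the real code list; it is shown only to make the segments-to-codes-to-themes consolidation concrete.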

Next, intentional paradigmatic turbulence, or space, was created to allow innovation and creation to enter through this gap with regard to fidelity (Hesse-Biber & Johnson, 2013). The researcher engaged in thoughtful listening across research boundaries (Guba, 1990; Hesse-Biber & Johnson, 2013). In the confusion and conflict of many perspectives, collaboration and clarity can emerge (Garratt & Hodkinson, 1998; Hesse-Biber & Johnson, 2013; St. Pierre, 1997). Theoretically based reasoning for methods decisions provided solid grounding (Silverman, 2006). Each interview fidelity ideal was considered both in terms of the postpositivist epistemology and ontology of the overarching randomized controlled trial and the constructionist interpretive interview context. Here, a middle approach between the loftiness of philosophy and the groundedness of methods was sought. Pragmatic descriptions from the example study were included to illustrate the constructs. The interview fidelity ideals proposed are both examples and exemplars.

Finally, the validity and intervention fidelity literatures were reviewed for relevance in conceptualizing interview fidelity ideals and best practices. These concepts were referenced to build a complete understanding of interview fidelity in this context. Intervention trial literature was assessed and reimagined for possible additions to the research rigor of interviews. Validity in all research contexts was consulted. The researcher (A. Garrett) described the striving and struggle for valid interpretive interviews within the context of an RCT, using examples taken from the contextual study on exercise adherence. What was described is more process than product.

Site and Participants

Data were collected from two cities with linked university systems and hospitals from January 2013 through April 2016. These sites were the University Medical Center in Meadowville and State University in Clarkstowne. These sites were chosen for their capacity to recruit and enroll participants with heart failure in the time period determined by the larger RCT study. Meadowville and Clarkstowne were paired due to a previous working relationship between principal investigators. One investigator in Meadowville had access to a largely ethnically homogeneous population at a heart clinic. Clarkstowne had access to and experience with underserved and ethnic minority populations. Together, these sites better approximated the racial diversity of the nation as a whole.

As a methodological project, this example study does not have participants in the traditional sense. Participants were not recruited or consented for this dissertation. No participants were interviewed or observed. Interviewers were already working as personnel for the Hearts on Track research project, and participants attempting to adhere to exercise were already enrolled in the study. Instead, this research represents a systematic overview and analysis of the interview process.

The researcher (A. Garrett) was the supervisor of all the qualitative tasks associated with the larger project, including the interviews. The interviewers for the Hearts on Track project were the closest to participants. There were three interviewers, one at the Meadowville site and two at the Clarkstowne site. These interviewers were selected based on previous work experience at their respective sites. The interviewers matched the overall patient population on several dimensions, chiefly age and race. Interviewers were all older adults (older than 45), and all were women (Wenger, 2003). Two were white and one was African American. These nurses had varying levels of medical experience and education, and all understood basic health concerns when participants spoke about medical issues.

These interviewers were trained through an initial workshop, and their skills were maintained in booster instructional sessions occurring at approximately equal intervals every six months throughout the duration of the study. The two sites coordinated via technology (email, phone bridge, IP video chat, wiki) for various types of training and communication (reminders, discussions, or training sessions), which aided in bridging the physical distance between the two sites. There was an open line of personal communication between the researcher and interviewers for issues as they arose.

Data Collection

Data consisted of grant and personal documents, interviews, and literature as shown in Table 3.1. Existing materials were extracted and organized.

Table 3.1

Data Sources

                                                  Author(s)
Source                           Researcher  Interviewers  Participants  Research Team  Research Community
Interviewer Training Manual          X
Booster Session Materials            X
Interviewer Notes                                   X
Interview Protocols                  X              X                          X
Personal Notes & Memos               X
Email Correspondence                 X              X
Quantitative Validity Research                                                                  X
Qualitative Validity Research                                                                   X
Mixed Methods Validity Research                                                                 X

Documents. Written materials such as the interviewer training manual (Appendix D), booster training session materials, interviewer notes, interview protocols, personal notes, materials stored on internet drives, and prior email correspondence are included in the documents for this project. These materials were shared with committee members, as evidence of their value in the analysis, through a personal link on Dropbox (Dropbox Pro).

Interviews. Semi-structured interviews conducted by interviewers with Hearts on Track participants, in transcript and audio form, also served a role. These participant interview files were studied by the researcher in preparation for trainings. Topics covered in booster sessions were the direct result of information gleaned from the review of these materials. A subset of the total interviews was transcribed as part of the grant. The researcher reviewed transcripts when available and listened to audio files when no transcript existed or as an additional form of data. The researcher had access to these materials, which circumvented any gatekeeper issues; data management of these files was a regular task for the qualitative and mixed methods specialist on the grant.

This role of gatekeeper and supervisor could potentially raise the insider ethical threats of carrying out research in a place that is too near, such as a workplace. However, it was precisely serving in this role that caused some tension for the researcher and confusion as to the best possible course for quality for the qualitative data within the boundaries of the overarching RCT. The ideas resulting from this tension are brought forward in this research. Ethical issues that arose during the research process will be detailed in the results of this dissertation.

Literature. Literature on research validity was surveyed to bolster the method of interview fidelity. This literature was mined for additional ideas that could be implemented as specified or with some alterations. The philosophical frameworks of each interview fidelity ideal were articulated, and all research paradigms at work were fully contextualized.

Data Analysis

The analysis did not follow a particular methodological tradition but rather worked to create and demonstrate the process of a rigorous appraisal of the interview method within a mixed methods intervention trial project: interview fidelity. The approach to data analysis was informed by the researcher’s training and past experiences as a qualitative and mixed methods researcher as well as by involvement with the larger research study.

The artifacts (such as booster session agendas (see Appendix E), interviewer emails, and notes taken by interviewers during interviews) and participant interview transcripts were subjected to open coding, in which the data were fractured apart and then restored (see Appendix B). A qualitative analysis using descriptive coding was conducted (Saldaña, 2013). Following this, larger abstract categories were built out of the documents and interviews (Hatch, 2002). The analysis was conducted using the MAXQDA 10 software package (VERBI GmbH, 2015). Data were analyzed to form ideas regarding interview fidelity. Attention was paid to the different and multiple perspectives of the interviewers and the issues and concerns each raised. The researcher and interview supervisor also reviewed personal notes and reflected on project experiences. These themes became the interview fidelity ideals brought forth and discussed.

Through dialectical pluralism, multiple research paradigms can be embraced simultaneously. Epistemological pluralism offers much through 1) acknowledging many ways of knowing, 2) increasing understanding by way of integration, 3) demanding negotiation among differing epistemologies, and 4) working collaboratively to discover ways to adapt to others’ epistemologies (Miller & Erickson, 2006; Miller, Baird, Littlefield, Kofinas, Chapin, & Redman, 2008). Thus, the theoretical underpinnings behind the RCT and the interviews can communicate so that the “epistemological sovereignty” of one belief does not result in a role of increased power and entitlement that places the other in a subservient, oppressed position (Healy, 2003).

Several paradigms can be respected by considering how each can contribute to the interview form. In the space between paradigms, the researcher discovered ways of taking parts from each, considering and combining their epistemological elements, and holding on to foundational features of each while letting go of others. Heterogeneity in epistemological reasoning was explored. Interview fidelity constructs were envisioned across the postpositivist RCT landscape as well as the constructionist interpretive interview landscape to glean the full mixed methods account (see Appendix A).

The intervention fidelity framework was mined for ideas that might transfer to an interview fidelity situation. These constructs were searched for in the data. Intervention fidelity was also part of the coding process and resultant code list. Fidelity constructs were reimagined across research phases, including design, training, and maintenance.

The vast validity literature also served to assist the analysis through the contemplation of useful ideas for researchers conducting quality interpretive interviews within various contexts. All of these data points and their resulting patterns formed the interview fidelity ideals.


Validity

This study is an in-depth exploration of validity across paradigms, methodologies, methods, and phases of research for interviews. To conduct quality research is to thoroughly address validity. The researcher undertook this particular project to delve into validity more deeply and sought transparency throughout the research process. Raw data and data analysis files were shared with the committee. Multiple forms of data were gathered and analyzed (Creswell, 2013). Researcher reflexivity and mental models were detailed. Personal and project ethical concerns have been articulated. Furthermore, strategies from the literature for addressing validity for all types of research and at every phase of research were reviewed, and in several cases were implemented as part of this dissertation. Since the project is focused on validity, this researcher was especially attuned to these matters.

Chapter Summary

The elements that contributed to the interview fidelity design were detailed in this chapter. The study began with a qualitative analysis consisting of open and descriptive coding of the documents in order to arrive at the themes, or main issues, on which to focus attention when examining interview fidelity. Next, each theme was considered from the vantage points of postpositivist and constructionist philosophies, allowing creative thinking regarding the interview theme to occur. Along the way, the intervention fidelity framework literature scaffolded the themes and provided direction on the appropriate manner and timing of these interview fidelity concepts within the research process. Relevant validity literature was added to the interview fidelity construction where appropriate. In the following chapter, interview fidelity as a process and tool for research is discussed.


CHAPTER 4: Results

Drawing a strong connection between the methodology and the phenomenon has been lauded as a path to valid results for mixed methods research (Hesse-Biber, 2010) and for interview research (Mason, 1996). The larger mixed methods research study on which this dissertation was based used both the traditional methods of treatment fidelity and the newly created interview fidelity tool to ensure validity.

Interview fidelity was established by drawing on qualitative coding, paradigmatic dialogue, intervention fidelity frameworks, and validity literature.

Format

The format of the interview fidelity ideals in narrative prose rather than as a checklist was intentional and multifaceted. The standards were contextually based and related to the study referenced. The quality focus was on local meaning and understanding (Garratt & Hodkinson, 1998). Lists indicate a discrete and finite set of steps necessary to achieve validity. Research is generally more open-ended, and thus it would not be advisable to limit methods (Koro-Ljungberg, 2013). Stringent standards remove the artfulness, skill, and judgment from the process of determining quality (Garratt & Hodkinson, 1998; Rolfe, 2006; Roulston, 2011; Thorne, 1997). All too often, criteria are relied on by nascent researchers, as well as by those more removed from the research process such as policymakers and practitioners (Hammersley, 2008).

Interview Fidelity Ideals

1. What are the interview fidelity ideals for the Hearts on Track study?

To address the first research question, five themes were developed from the qualitative analysis: research contributions, interviewer-participant association, participant accommodation, process and procedures, and data management dimensions.

These categories arose from the smaller codes, sub-codes, and themes from the coding process. Through the writing process and researcher reflection, the coding structure was altered to stand at five themes. All themes were specific to the issues of the larger research project, since this is where they emerged. The following narrative includes a description of the interview fidelity constructs. Subsequently, each interview fidelity theme is discussed in the context of two research paradigms and the intervention fidelity and validity literatures.

Research contributions. The first theme described the time and skills interviewers dedicated to the interview project. Interviewer knowledge became part of the project. For instance, interview guide questions, participant selection, and memo data were all elements to which interviewers contributed.

Research contributions in the Hearts on Track study.

Question composition, revision, and elimination. Interview guides were written by the qualitative supervisor and researcher (A. Garrett) and the research team. The researcher had the goal of answering the research questions with the data obtained through the interview questions. The interviewers brought their perspective on what it would be like to ask the questions, as well as imagining the participant perspective in answering them. Feedback was obtained prior to the commencement of the study and during the study on question performance and utility. Based on such feedback, one question regarding exercise history was asked only at the initial three-month interview instead of at all subsequent interviews (six-month, 12-month, and 18-month). Other minor revisions and typos were remedied. Questions were streamlined to read in a simple and straightforward style. When a new protocol was decided upon for an exit interview at the 18-month time point, and another guide for standard care participants, interviewers helped clarify and shorten questions based on how they thought participants might perceive the questions and what they thought participants might give for responses.

Participant selection. Since the researcher did not interact directly with study participants and the interviewers often encountered participants, the interviewers had much to offer in suggesting a purposeful sample. All intervention participants were interviewed, but only some had their interview files transcribed, as per the project guidelines. These participants were chosen for transcription based on their ability to contribute to the collective knowledge of exercise adherence across a spectrum of contexts. Interviewers knew participants’ stories and made suggestions based on a myriad of contextual variables and other factors, drawing on their exceptional knowledge of participant history. The researcher, along with the research team, made the ultimate decision for transcription selection.

Memoing. Interviewers created written data in the form of notes taken during and after the interview session. The interview guide included a place for notes for each question. Nonverbal data such as demeanor, eye contact, posture, clothing, and facial expressions were recorded. At the end of each document, there was a place for analytic memos capturing main ideas, impressions, and summary statements. All of these data helped to contextualize the participant and interview within the study.

Interviewer-participant association. The interviewer-participant relationship became the second theme. This relationship affected the validity of the study in several ways through the roles each party occupied. Other factors influencing the bond included how the interviewer processed thoughts about the research, power differentials, and bias.

Interviewer-participant association in the Hearts on Track study.

Interviewer and participant roles. Interviewers assumed various tasks and roles in their work with participants. Training sessions and booster sessions were used to brainstorm topics and practice icebreakers and small talk to foster rapport between interviewers and participants. Interviewers were asked to listen attentively to the participants and their experiences. During the interview, interviewers used short statements and some self-disclosure to provide participants with feedback. They also used humor and lightheartedness to respond appropriately to participant comments. Interviewers bolstered participants' confidence by telling them the research team was interested in learning more about an experience in which they were the expert (exercise adherence with chronic heart failure). Expressing thanks and appreciation for participants' time was another instance of an interviewer-participant relational role.

Interviewers collaborated with participants in the production of data as the project moved forward across all longitudinal time points. The project was fortunate to have the same interviewers throughout; this stability supported participants' expectations of and confidence in the program. Participants' major task was to attend the interview and participate fully in the interview process.

Interviewer reflexivity. During the initial training, all interviewers wrote and shared reflexivity statements explicating the social locations and lenses (race, ethnicity, status, age, nationality, education, and gender) they occupied and how these could impact the research. This was a pivotal point for interviewers to appreciate the many ways their lives intersected with those of their participants and how they could come to understand each other. Additionally, a space for memoing was part of the interview note-taking guide for reflective processing throughout the research project.

Power. Power differentials are destructive to truthful data collection. In the case of a medical intervention in which participants received substantial benefits for participating, interviewers did not want deceit or coercion to enter the interview space and influence participants. The interviewer role was described to participants as that of a listener, not a judge; participants were to make candid and reflective contributions to the research effort. Interviewers took care to fully explain each of these roles and made clear that honest responses were most valuable, even if that meant participants were not following the intervention exactly or at all. Participants were informed that all experiences were respected and that they would remain in the study with no loss of benefits, as stated in the consent form. Participants were fully aware they were altruistically contributing to research to help others. Participants also recorded their exercise in a diary (self-report) and with an exercise watch (objective measure); these additional data points may have contributed to participants' honesty.

Bias. Working together with participants to co-create meaning, interviewers' subjectivities were an inherent part of the research process. Again, this was the motive for examining and articulating beliefs in reflexivity notes. Objectivity was not seen as attainable or even preferable. Interviewers probed for details rather than assuming they knew participant meanings. Throughout the series of interviews, interviewers were encouraged to learn small details about the participants; this information helped build the relationship that served as a foundation for the bond over the longitudinal interview process.

Participant accommodation. The third interview fidelity theme described adaptations to the methods of data collection to assist participants with their needs. Health accommodations and content accommodations were the two types of support offered.

Participant accommodation in the Hearts on Track study.

Health accommodation. Because the participants in the larger RCT had compromised health, interviewers were prepared to support them during the interview process. In the initial training session, health and illness topics were discussed as potentially sensitive issues, along with ways to demonstrate empathy. Participants discussed their bodies and their health and illness status, and this preparation was crucial for handling emotionally intense situations with compassion. Another instance of health accommodation dealt with physical conditions: interviewers demonstrated flexibility in assisting participants with hearing impairments by speaking more loudly and deliberately, sitting on their "good" side, and allowing lip reading.

Another example of aid was praising a stroke patient for her obviously laborious articulations and slowing the pace so as not to make her feel rushed. On the whole, participants generally felt understood in their health conditions because all interviewers were nurses: medical procedures, abbreviations and acronyms, and medications were familiar, which again fostered trust and rapport. Scheduling of interviews often dovetailed with completion of other instruments and took place at a site where participants could exercise or visit a doctor, resulting in maximum convenience. When in-person interviews were not possible, telephone interviews served as a backup method.

Content accommodation. Every effort was made to craft questions that were easy to understand and answer. When posing questions, interviewers reformulated questions, used probes, and allowed space for silence. If confusion or general misperception persisted after several attempts, interviewers might offer qualified suggestions based on a range of responses from other participants. Typically, participants benefitted from these suggestions and were able to discuss their own original ideas after this spark. Participants proved robust against merely adopting proposed suggestions as their own.

Process and procedures. The elements that make the interview run smoothly and allow quality data to be obtained are included in the process and procedures section of interview fidelity, the fourth theme from the analysis. Specific components consisted of probing and follow-up questions, the interview structure, and managing the interview.

Process and procedures in the Hearts on Track study.

Probing and follow-up questions. Formal probes and follow-up questions were written out as part of the interview guide (Appendix F). Interviewers had the liberty to utilize these formal prompts or to customize their own. The interviewers were careful to ask for additional information in a way that seemed curious and interested rather than pushy, and they made use of silence as a probing mechanism. Interviewers were instructed not to take participants' responses for granted but to seek more specificity, such as definitions and examples. Often, interviewers were able to build on participants' comments and create an opportunistic, customized prompt. Fortunately, both interviewer and participant had multiple encounters in the longitudinal research context to perfect their technique together. Many example probing questions were provided and practiced as part of initial and ongoing training exercises.

Interview structure. A semi-structured interview guide was employed, replete with an introductory script. Interviewers asked the questions in a friendly, approachable style. Questions, prompts, and the introductory script were customized and made their "own" with the understanding that essential content and meaning remain. To approximate a conversation, questions could be asked out of order to follow the participant's lead; all questions were to be asked, but the order was inconsequential. The same set of open questions was asked at 3, 6, and 12 months. Emergent leads (factors affecting exercise adherence) were not followed up with official novel questions so as not to interfere with the intervention. A new interview protocol at the 18-month (exit) session was added as a concession.

Interview management. Interviewers had much to attend to during the interview session. Reading the participant's nonverbal cues and ascertaining whether they matched the verbal comments was one such duty; these messages were recorded on interview guides. Listening attentively and deciding if or when the participant had diverged enough from the question posed to warrant a redirecting statement was another common decision interviewers made. The overall pace and flow of each interview was unique, with its own rhythm and pattern. The interviewers kept the interview moving with short statements and utterances to demonstrate their focus. Mindful that the intent of the interview was the participant experience, interviewers allowed the participants to speak freely and consciously limited their own dialogue.

Training materials. Initial training consisted of a two-day, six-hour training. This was conducted face-to-face for the Meadowville interviewer and via IP video technology, which allowed both video and audio transmission, for those in Clarkstowne. Topics covered began with listening and reflexivity and then progressed into the conduct of interviews, including questions and behaviors. Interviewers observed the co-construction of data and evaluated model interviews. Ethics and special populations were discussed. Finally, interviews in the nursing sciences and health field were covered. Some practice of the interview protocol was included. However, at the conclusion of the initial training, it was apparent that more interactive training was needed before interviewers worked with study participants, especially on the specific protocols for the study. An extra two-hour qualitative interviewer practice session was therefore scheduled before any interviews took place. Questions, follow-up questions, and probes were modeled and then practiced so they flowed naturally. The interview protocols from the study (participant/patient, coach, session leader) were used to practice interview facilitation and note-taking techniques. Interviewers paired up and critiqued one another. Interview logistics, such as what to do and bring to an interview setting and how to end a session appropriately, were reviewed. Questions and problems were addressed in the session so that interviewers were at ease with the protocols from the start of the study.

Following the initial training, interviewers met with the supervisor roughly every six months during the intervention. Technology was utilized to bridge the distance with Clarkstowne: IP video was used when possible, with Adobe Connect as the backup method. A wiki was created to share written documents; this was later replaced in favor of direct email transmission.

Monitoring the interview process was deemed essential to the intervention for the supervisor/researcher and the larger research team. Sitting in on the interviews would severely interfere with the data collection process. The least obtrusive way of checking in on these interviews was listening to the audio files or reading through the transcripts, provided they were available. When possible, the researcher would both read the transcript and listen to the audio file to get a full sense of the interview experience.

Interviewer notes were also available and provided an added tool. In preparation for each booster training session, the researcher reviewed interviews (transcripts and audio files) to create an agenda. Early on, the researcher kept pace with the project, reading all available transcripts and listening to many of the audio files. As the project advanced this was no longer possible, and sampling had to take place. To sample files, the researcher first identified interviews that had occurred since the last session, then chose files so that the sample spanned interviewers and time points. Each transcript or audio file was reviewed intuitively; a guide based on the types of elements the researcher searched for in the files is provided (see Appendix G). Based on this review, written and oral feedback was provided to the group. Feedback was aimed at all the interviewers as a team so no one felt singled out. Had interviewers been at vastly different skill levels, generalized feedback would not have been appropriate and individual feedback might have been the better course. At these meetings, the researcher was able to discuss and clarify issues as well as share what was going well and where there could be improvement. The researcher shared thoughts openly, as did the interviewers, during the booster sessions. Detailed notes were kept of the meetings and posted to the wiki or emailed directly to the interviewers (see Appendix G).

Data management dimensions. Issues of data management are not often considered in interpretive research contexts, although these matters are no less important (Bazeley, 2013). Training materials such as the manual and booster documents are referenced. Record keeping, including general file handling, missing data management, and equipment usage, is detailed.

Data management in the Hearts on Track study.

Record keeping. Instances of managing these data included naming files, saving files, storing files (paper/electronic and temporary/long-term), and sending files among team members and to those outside the research group. Audio files are large and quite common in text-based research; in this study, files had to be moved to another server because storage space became an issue. Interview note sheets had features to facilitate accurate record keeping, including participant number, interview month, site or phone interview, and date, time, and location.

It is vital to know when data are missing and to make attempts to gather the data when possible. Missing data, a perennial problem in intervention trials, were often discussed in booster sessions. Discussion focused particularly on finding solutions in the data collection phases. Telephone interview procedures were reviewed and emphasized. Many other ideas were discussed in these sessions and some were implemented (such as transportation vouchers). Encouraging participants to attend the interview session was critical since there is no method for imputing missing qualitative data.

Problems with digital recording devices resulted in multiple interviews within one audio file, making transcription confusing and thus more difficult and costly. The supervisor reviewed the situation, and a document with the proper method for recording and saving files was sent to all interviewers as a reminder. This material was also covered in the nearest booster session.

Philosophical and validity contexts. With the interview fidelity ideals established, it is now possible to move to a discussion of the themes in the context of research philosophy. The following research questions are addressed:

2. How can these interview fidelity constructs be operationalized across multiple paradigms (e.g., postpositivism and constructionism)?

3. How did this researcher navigate between two philosophical perspectives to arrive at a negotiated path for validity or interview fidelity?

4. What is added to the concept of interview fidelity from treatment fidelity standards?

5. What is added to the concept of interview fidelity from the validity literature?

Envisioning interviews through the two research philosophies used in this research project allows possible outcomes to materialize. First, the interview fidelity theme is imagined from a postpositivist viewpoint. Next, a constructionist research perspective is employed. Finally, a resolution drawing on the exchange of ideas from each paradigm is presented. Throughout these philosophical descriptions, the intervention fidelity and validity literatures are drawn upon to bolster the assertions.

Philosophical and validity contexts of research contributions. Research contributions to the interview approach, as viewed within a postpositivist research paradigm, are virtually nonexistent. Data collectors in postpositivist projects do not augment the research effort themselves, so as not to bias the study and affect the results. Measurements with well-articulated psychometric properties are employed unaltered, and highly structured interview protocols are administered verbatim (Merriam, 2009). Probability sampling is frequently used to generalize to populations; therefore, interviewer knowledge is not needed for purposefully choosing participants. Observational data may be recorded freely or in checklist format to ascertain nonverbal information.

In the constructionist view, those collecting data should have ample working knowledge of the interview questions and opportunity to alter problematic or underperforming items. Particularistic information regarding the informants is beneficial to elicit rich data on the phenomena. Few have access to such information since this is based on personal experience and time spent with participants (Brink, 1991). Interviewers in this study were asked to provide their input in analytic memos after interviews and at several points when solicited by the researcher to contribute to the analysis. Interviewers were instruments of data collection and worked alongside and with participants to craft the data in the interviews. Therefore, this request aligns with the paradigm (Patton, 2015).

Examining the research contributions in this study, the constructionist leanings are substantial. In addition to merely collecting data, interviewers added vital elements to the study at several points. Interviewers assisted in question composition and revision. They reached into their deep and personal repository of information and aided the research team in choosing participants for specialized data analysis. Capturing observational notes and adding to the interview dialogue were other ways that interviewers made significant contributions to the research. These contributions transcended the role of neutral postpositivist data collector.

Philosophical and validity contexts of interviewer-participant association. To qualify as a postpositivist interviewer, there is often a well-defined set of standards (such as education, experience, or credentials), experience with special populations, and a willingness to participate in training and evaluations to ensure equivalence among personnel (Bellg et al., 2004; Borrelli et al., 2005; Gearing et al., 2011; Moncher & Prinz, 1991). Investigators may be compared and rated across sites, time points, and other demographic dimensions to ensure similarity (Borrelli et al., 2005; Moncher & Prinz, 1991). Differences in style (warmth, credibility, sensitivity, supportiveness) may need to be leveled (Bellg et al., 2004; Borrelli, 2011; Gearing et al., 2011; Robb et al., 2011). The researcher role in postpositivist philosophy is one of impartiality, neutrality, and detachment. Objectivity is emphasized to reduce bias (King & Horrocks, 2010). Due to this concentration on objectivity, interviewers would not focus on themselves as valuable contributors to the research endeavor. It follows then that concerns about power would not typically be a focus from the vantage point of those using this paradigm.

Constructionist interviewers' most fundamental role in the research process is relationship development for the co-creation of knowledge along with the participant in a subjective exchange (Roulston, 2012). Conversational skills facilitate meeting new participants and forming the foundation of a friendly atmosphere, and thus are essential elements of interviewing (Berg, 2008; Roulston, 2012). Encouraging participants to feel comfortable and flexibly bending to their needs and points of interest within the interview context is another task of this interviewer (King & Horrocks, 2010). Listening and responding to participants appropriately within the interview context is vital for effective exchange (Padgett, 2012; Roulston et al., 2008, 2011). Failing to offer proper support can result in a breach of the "interactional requirements of the interview," whereas offering it validates and reassures participants of their contributions (Miczo, 2003, p. 483). Constructivist participants are thought of as possessing "communicative competence" and "inside knowledge of some social world" (Warren, 2002, p. 88). Interviewers persist over time to build and maintain relationships with participants (Moncher & Prinz, 1991). Reflexive interviewers explore their assumptions about the research topic and participants prior to and throughout the research process, continuing during the interview (Hsiung, 2008; Koch, 1996; Roulston, 2012). Interviewers encourage egalitarianism in their working relationship (Isaac & Franceschi, 2008); there is an equality inherent in co-constructed data. Investigators reassure participants they will not lose benefits, no matter their responses (Kelly, 2013). Each party in the collaborative, interactive interview experience has subjectivities (Koro-Ljungberg, 2008; Patton, 2015). These unique perspectives virtually guarantee idiosyncrasy of responses among participants (Mason, 1996). However, the outcome of the interview is not corrupted by bias since the data are cooperatively crafted (Gubrium & Holstein, 2002); the subjective data constructed are not false or erroneous (Hammersley, 2003). Rather than purge the dataset of highly personalized data, which are the focus, interviewers must be reflective researchers throughout the process (Maxwell, 2013).

Once again, constructionism prevails in guiding the research methodology of this project. Whereas postpositivist interviewers act in detached ways in their interactions with participants to increase impartiality, constructionist interviews are predicated on rapport. Interviewers uplifted participants by referring to them as experts and trusted their path forward as equals. Investigator biases were carefully and thoughtfully studied for their impacts.

Philosophical and validity contexts of participant accommodation. Structure and organization are valued in the postpositivist tradition, and this holds for research on health issues. Any deviation from protocol would need to be noted and preferably planned a priori so that preparations could be made. Timing and priming effects are possible when multiple forms of data (interviews and instruments) are collected at one time (Song, Sandelowski, & Happ, 2010). Participants could respond differentially to divergent modes of delivery (in-person versus telephone interviews). Discussing samples from a range of past participant responses with participants would be discouraged and considered a source of bias.

Constructivists desire to set participants at ease and to foster a close connection that will enable data gathering about private and potentially emotionally intense topics (Morse, Niehaus, Varnhagan, Austin, & McIntosh, 2008). Interviewers demonstrate "empathic neutrality" and take care not to judge or condemn behavior (Padgett, 2012, p. 457). Interviewers should prepare for the special challenges of the populations they interview, such as older populations who may need more time to build rapport, have difficulty staying focused or remembering, experience sensory impairments, tell lengthy stories, and more easily channel strong memories (Patton, 2015; Wenger, 2003). Telephone interviews do not produce data as strong as face-to-face interviews, but they can substitute when conditions require (Berg, 2008; Padgett, 2012). Having a dialogue around participant responses is not inappropriate in this context. While overtly leading questions and social desirability are discouraged, discussing ideas is part of the process of the co-construction of knowledge (Patton, 2015; Roulston, 2011); it would not differ from hearing a statement in a focus group and then making one's own comment. Subjectivity and bias do not have the same negative connotation in this context.

Considering the philosophical views, participant accommodation yet again included more constructionist elements. The participants' needs were of primary concern in the study and were often placed ahead of data collection needs. Interviewers worked to create an environment that felt secure, where sensitive health topics could be broached. If participants required it, telephone interviews were conducted even though these interviews lacked nonverbal observational data. Postpositivist apprehensions about differential stimuli causing differential responses were overridden by concerns about missing data. Interviewers maintained the flow of the conversation by making suggestions when participants faltered, facilitating the creative process.

Philosophical and validity contexts of process and procedures. Postpositivist researchers would be apt to utilize an established tool validated for their population (Bellg et al., 2004). It is unlikely that deviations or follow-up questions would be necessary; such lines of questioning may be seen as detracting from the validity of the instrument if not repeated verbatim for all participants. A detailed interview guide and a protocol standardized for administration are essential for reliable, comparable data (Bellg et al., 2004; Bernard, 2011; Resnick et al., 2005). Questions are organized in the same order (unless multiple forms are used), with exact question wording used to reduce bias and increase objectivity (Patton, 2015). Similarity in interview length across participants is encouraged to equalize the participant experience (Bellg et al., 2004). Staff development, including training manuals, standardized training (initial and booster sessions), measurements for skill attainment, and feedback grounded in supervision, are all aspects of fidelity (Bellg et al., 2004; Borrelli, 2011; Borrelli et al., 2005; Eaton et al., 2011; Gearing et al., 2011; Langberg et al., 2011; Lewis, 2009; Moncher & Prinz, 1991; Resnick et al., 2005; Resnick et al., 2009; Robb et al., 2011).

Constructionism leads researchers to collaborate with the participant to construct data, including using various probing and follow-up questions when necessary (Dikko, 2016). These questions guide the conversation deeper and garner important details (Roulston, 2010). The more conversational the interview and the more flexible the interviewer to the cues of the participant, the more rapport and trust increase, and the validity of the data follows. Besides co-constructing data, interviewers occupy other roles in the interview, such as that of observer. Using observation skills to examine nonverbal messages in tandem with interview skills aids in understanding the full meaning of the participants' data (Patton, 2015). Stopping tangents from wasting time and money and setting a good pace also serve the project well but must be done with tact (Roulston et al., 2008). Maintaining the appearance and feel of a conversation while allowing the participant to speak takes finesse (Roulston et al., 2003). Training improves skill in interviewing (Bernard, 2011). Multiple forms of training have been found beneficial: modeling, observation, guided practice, discussion, critique, project-based learning, and group collaboration (Hsiung, 2008; Roulston, 2012; Roulston, deMarrais, & Lewis, 2003; Roulston et al., 2008).

Interview processes and procedures drew on both research paradigms presented to enhance validity. Postpositivist rigor led interviewers to ask all interview questions so that all research questions could be addressed. Example probing and follow-up questions were written out on the interview protocol for use; introductory and conclusion scripts were also available. Constructionism led interviewers to customize the questions and scripts to their own conversational and personal styles in a free-flowing format. Interviewers followed the participants' lead rather than rigidly following the interview guide. The overall format was midway between these two philosophies: a semi-structured layout. In terms of training, a postpositivist essence guided the standardized process. The initial training comprised two binders' worth of information and activities; booster sessions were always preceded by a supervisor evaluation and accompanied by agendas and notes.

Philosophical and validity contexts of data management. Data management is as essential as standardization in the postpositivist domain. Without standard procedures, measurements lose their accuracy, and internal as well as external validity degrade (Gearing et al., 2011). Managing numerical data is part of the data analysis process. While strategies exist for handling missing data, the best solution is prevention.

Managing text data sets is now facilitated by technology (VERBI GmbH, 2015). Missing data are challenging because they cannot be recaptured. Mastery of technology such as digital recorders is now a necessary skill for an interviewer (Patton, 2015; Roulston, 2012; Roulston et al., 2008).

The importance of conscientiousness in data collection and management spans all philosophies. Accuracy is vital for numerical calculations; it is also crucial for text data. Software assists researchers in the organization and management of all types of data. Instruments collecting data should be reliable whether they are paper forms or electronic audio recording devices. Postpositivist paradigms emphasize the fundamental nature of data management, while constructionist philosophies have been slow to accept its significance.

Interview fidelity framework. The interview fidelity process is distinct from treatment fidelity. One aspect of this disparity lies in how the process is conceptualized and, therefore, in when during the interview process each interview fidelity ideal requires focus.

4. What is added to the concept of interview fidelity from treatment fidelity standards?

Typically, treatment fidelity is divided into five distinct categories: treatment design, training providers, delivery of treatment, receipt of treatment, and enactment of treatment skills. Taking cues from this fidelity framework, a new standard is rendered for interview fidelity with some similarities and points of divergence. Interview fidelity, while important throughout the research effort, can best be understood as four phases: design, training, maintenance, and evaluation. All aspects that are considered as an interview fidelity ideal can be categorized into one or more of these areas.

Design refers to the familiar research design phase, where the project, and thus the interviews, are planned and written. Training describes the initial teaching and preparation of the interview staff so they are able to carry out their duties professionally. The maintenance phase depicts the continued skill development of the interviewers on staff. Finally, evaluation denotes the assessment of the interviewers' abilities in their duties as interviewers. Design and training are typically of greatest concern early in the study. Evaluation and maintenance form an iterative feedback loop to continually sharpen the expertise and proficiency of the interviewers as the study progresses.

A summary of each study sub-theme and how it fits into the interview fidelity process is displayed in Table 4.1.

Table 4.1

Interview Fidelity Process

Interview Fidelity Themes and Sub-themes (phases: Design, Training, Maintenance, Evaluation)

Research Contributions
  Question composition, revision, and elimination: X X X
  Participant selection: X X X
  Memoing: X X X X

Interviewer-Participant Association
  Interviewer/participant roles: X X X
  Interviewer reflexivity: X X
  Power: X X X
  Bias: X X

Participant Accommodation
  Health accommodation: X X X
  Content accommodation: X X X

Process & Procedures
  Probing and follow-up questions: X X X X
  Interview structure: X X X X
  Interview management: X X X
  Training materials: X X X X

Data Management Dimensions
  Record keeping: X X X X

The categories of interview fidelity are not mutually exclusive. A sub-theme may be central to the interview fidelity process across all four phases or in only one phase. The sub-themes and corresponding phase descriptions presented are dependent on the larger study referenced in this dissertation.

CHAPTER 5: Discussion and Conclusions

Summary of Major Findings

Five themes emerged from the study data that guide interview fidelity:

(1) research contributions, (2) interviewer-participant association, (3) participant accommodation, (4) process and procedures, and (5) data management dimensions.

Interviewers had important research contributions to make when conducting interpretive interviews. Recommended sources of knowledge included question consultation, participant selection, and memo production. The relationship between the participant and interviewer was paramount. To support these relationships, interviewers were clear about their roles, reflexive about themselves in the research context, and aware of the potential for coercion. Participant accommodations aided in the collection of knowledge through supportive changes: interviewers adapted the interview experience to be more comfortable and safe for participants. Through the process and procedures element, interviewers demonstrated their interest in participants’ comments by probing for more information and attending to non-verbal cues. Interviewers customized the structured interview guide to smooth the dialogue into a conversational style.

Training sessions were used to streamline and refine data collection efforts. Data management incorporated ways of effectively organizing and managing text and audio data.

Validity Contributions of Interview Fidelity Ideals

Each of the identified interview fidelity themes added to the ultimate goal of increased validity for the interviews and for the study. The impact of each was unique.

Applying multiple research philosophies and the validity literature reveals the efficacy of these ideals.

Validity value of research contributions in this study. Interview fidelity was enhanced by examining the research contributions the interviewers made to the study.

Lack of understanding or inadequate responses on the part of participants could have resulted in invalid data; therefore, it was imperative to have coherent questions. The quality of interview questions was reinforced when interviewers contributed their comments during development and later when they customized them during the interview process. Choosing the strongest participants for transcription and analysis yielded data that could address the research questions. The researcher mined the interviewers’ knowledge to purposefully select participants based on distinctive characteristics and contexts. The addition of memos and notes contributed extra perspective to the interview (Merriam, 2009). More data points served to build support for the participants’ statements. Introductory analysis began with these interpretive memos based on impressions and summary comments.

Validity value of interviewer-participant association in this study.

Interviewers and participants had defined roles within the interview process that were contingent on their relationship. For data to be generated with interpretive interviews, trust first had to be established between the two parties. Interviewers espoused genuineness, reflexively considering their place in the research. The depth of the interviewers’ introspection steered the direction of the interview and the level of bonding attained. Participants, feeling valued, produced data in authentic interactions with interviewers. The validity of these data was bolstered as an artifact of the strength of the interviewer-participant relationship.

Validity value of participant accommodation in this study. Not just more data but data of great depth were created in an environment where participants were respected, regardless of health or illness status, by the caring medical professionals who served as interviewers. When participants felt safe enough to trust these interviewers, authentic data were revealed. The nursing interviewers in this study followed the participants’ lead whenever possible, shared encouraging comments when appropriate, demonstrated supportive behaviors (like speaking into a “good ear”), and made comments specific to the questions to inspire participants as co-participants in the research experience.

Validity value of process and procedures in this study. Many aspects of process and procedures contributed to improved fidelity of the interview experience.

Interviewers used formal probes and adapted and created probes as necessary to allow participants to add details, elaborate, and clarify their statements (Patton, 2015). Overall, structure of the interview was upheld with some flexibility added where appropriate and beneficial to the study overall and to data quality (Trainor, 2013). Interviewers learned to multitask, to manage the many facets of interviews effectively, and to capture the available data by listening, note taking, and following participants’ cues. Consistency and careful calibration of the interview staff through an intensive initial training and subsequent booster session trainings kept interviewers centered on the research goals and their skills sharp. The written materials assisted interviewers in becoming highly skilled data facilitators.

Validity value of data management in this study. Data management was vital to the pursuit of interview fidelity. Maintaining accurate records focused the resources on the remaining project goals. Keeping clean records was a large task in a multi-site, team-based study. Technology was an important tool in handling data. It was used in collecting data (digital recording devices), for analyzing data (software), and for communicating among interview team members (audio and visual platforms).

Findings Related to Literature

Health and medicine disciplines have utilized mixed methods designs more often than any other social or behavioral discipline to answer their multifaceted research questions (Ivankova & Kawamura, 2010). The mixed methods research community insists on new tools and thinking to understand current and future research dilemmas (Hesse-Biber & Johnson, 2013). Yet, individuals and teams of researchers struggle to communicate effectively and ultimately choose from among the numerous paradigms available for their inquiry process within a mixed methods framework (Denzin & Lincoln, 2011; Gardner, 2012; Lather, 2006; Mertens, 2013). Examining and elucidating paradigm assumptions is considered essential for legitimation (O’Cathain, 2010; Onwuegbuzie & Johnson, 2006). Validity, as a form of communication, can bridge disparate research cultures and may actually assist in clarifying this process (Mesel, 2013). Ivankova and Kawamura (2010) suggest that those with “different epistemological traditions can inform and enhance each other’s mixed methodological practices” (p. 605).

Looking closer at the research tools in health sciences, particularly the RCT, there is mounting discontent regarding its unparalleled status. Devisch and Murray (2009), in their theoretical deconstruction, recommended widening the definition of what constitutes evidence beyond the RCT in evidence-based medicine (EBM). Research design alone does not ensure quality, even with the venerated RCT (Grossman, 2008). The postpositivist intervention can be bolstered with divergent methods guided by a distinct epistemological tradition, replete with its own set of assumptions (Haslum, 2007). Thus, a case for interpretive interviews to add to the RCT as a mixed methods study can be made.

Undeniably, interpretive research adds quality to RCT and mixed methodologies if conducted with integrity. The question of validity for health researchers is serious and continuous (Koro-Ljungberg, 2013). Researchers and interviewers must appreciate the epistemological ramifications of the interview as a whole process (Roulston, 2012; Trainor, 2013).

What has been brought forth in this study is a novel process for considering and improving the validity of interviews within the context of a mixed methods trial by considering the process of establishing interview fidelity. This dissertation research advocated interview fidelity ideals and discouraged a simplistic criteria approach, which is in line with much research on validity (Atkinson, 1995; Denzin & Lincoln, 2011b; Maxwell, 2013; Seale, 1999). Since there is no single set of interview fidelity ideals that could be applied to all interview studies in all contexts, researchers must possess the expertise to plan and review their own studies to ensure validity is established. The goal of this procedure of establishing interview fidelity was to provide examples of considerations for interview quality that arose during the course of research. These examples may serve others as they attempt similar work.

“Methodological discussions of the quality of research, if they have any use at all, benefit the quality of research by encouraging a degree of awareness about the methodological implications of particular decisions made during the course of a project…guard against more obvious errors…give ideas for those running short on these during the course of a project” (Seale, 1999, p. 475).

Limitations

The requirements of each study are unique and so it follows that the elements of interview fidelity would be distinct as well. The components of interview fidelity were considered and selected based on the exemplar study context and its inherent ontological foundations and epistemological beliefs.

This study focused on design, training, and data collection elements. There are many factors affecting validity, and thus interview fidelity, in the data analysis, interpretation, report writing, and dissemination phases. These were beyond the scope of the research and could be a focus for future studies.

Since the focus of this dissertation was on the methods and not on content, no new data were collected. This influenced the layout and content of the dissertation. One exemplar study was utilized to provide context for the interview fidelity concept.

However, the researcher did spend significant time gathering, preparing, and organizing files for analysis and loading these into an online repository.

Implications

Conversation surrounding a new research tool for validity would benefit several groups. First, mixed methods scholars have another tool to assist them in paradigmatic reasoning when using interviews. Next, researchers, and particularly health researchers, have examples for bolstering validity in their interview studies. Finally, reviewers could consider such reasoning an important example of how researchers navigate between multiple paradigms.

Those individuals who wish to conduct a review of interview fidelity ideals within their own studies to augment or ensure validity may certainly do so. The following serves as a suggestion for conducting such an investigation in a study of one’s own. To begin, materials and communications pertaining to the interview process should be collected, reviewed, and analyzed. If such an extensive evaluation is not possible due to lack of resources or other reasons, then great care should be taken in the planning of the interviews, and pauses should be taken at decision points. Research paradigms associated with the project must be articulated. Research philosophy should inform the methods. In these planning stages and pauses, the researcher can enter into a dialogue with the philosophy and method to decide on a path forward. From this dialogue, interview fidelity ideals are articulated. Each ideal may also be reinforced by the intervention and validity literatures, which was one goal of this dissertation. The attainment of interview fidelity and validity is a critical component of research utilizing interviews as sources of data. Researchers must understand both the breadth and depth of these processes in order to minimize threats to reliability and validity in mixed methods research.


References

Abowitz, D. A., & Toole, T. M. (2010). Mixed method research: Fundamental issues of design, validity, and reliability in construction research. Journal of Construction Engineering and Management, 136(1), 108-116. doi: http://dx.doi.org/10.1061/(asce)co.1943-7862.0000026

Albright, C. L., Pruitt, L., Castro, C., Gonzalez, A., Woo, S., & King, A. C. (2005). Modifying physical activity in a multiethnic sample of low-income women: One-year results from the IMPACT (Increasing for Physical ACTivity) project. Annals of Behavioral Medicine, 30(3), 191-200. doi: http://dx.doi.org/10.1207/s15324796abm3003_3

Alise, M. A., & Teddlie, C. (2010). A continuation of the paradigm wars? Prevalence rates of methodological approaches across the social/behavioral sciences. Journal of Mixed Methods Research, 4(2), 103-126. doi: http://dx.doi.org/10.1177/1558689809360805

Altheide, D. L., & Johnson, J. M. (1994). Criteria for assessing interpretive validity in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 485-499). Newbury Park, CA: Sage.

Appel, L. J., Champagne, C. M., & Harsha, D. W. (2003). Effects of comprehensive lifestyle modification on blood pressure control: Main results of the PREMIER clinical trial. JAMA, 289(16), 2083-2093. doi: http://dx.doi.org/10.1001/jama.289.16.2083

Appleton, J. V. (1995). Analysing qualitative interview data: Addressing issues of validity and reliability. Journal of Advanced Nursing, 22, 993-997. doi: http://dx.doi.org/10.1111/j.1365-2648.1995.tb02653.x

Atkinson, P., & Silverman, D. (1997). Kundera’s immortality: The interview society and the invention of the self. Qualitative Inquiry, 3(3), 304-325. doi: http://dx.doi.org/10.1177/107780049700300304

Avis, M. (1995). Valid ? A consideration of the concept of validity in establishing the credibility of research findings. Journal of Advanced Nursing, 22, 1203-1209. doi: http://dx.doi.org/10.1111/j.1365-2648.1995.tb03123.x

Bailey, P. H. (1996). Assuring quality in narrative analysis. Western Journal of Nursing Research, 18(2), 186-194. doi: http://dx.doi.org/10.1177/019394599601800206

Barbour, R. S., & Featherstone, V. A. (2000). Acquiring qualitative skills for primary care research. Review and reflections on a three-stage workshop. Part 1: Using interviews to generate data. Family Practice, 17, 76-82. doi: http://dx.doi.org/10.1093/fampra/17.1.76

Barbour, R. (2008). Introducing qualitative research: A student’s guide to the craft of doing qualitative research. Thousand Oaks: Sage. doi: http://dx.doi.org/10.4135/9780857029034

Beitan, B. K. (2012). Interview and sampling: How many and whom. In J. F. Gubrium, J. A. Holstein, A. B. Marvasti, & K. D. McKinney (Eds.), The SAGE handbook of interview research: The complexity of the craft (2nd ed., pp. 243-253). Thousand Oaks: Sage. doi: http://dx.doi.org/10.4135/9781452218403.n17

Belgrave, L. L., & Smith, K. J. (1995). Negotiated validity in collaborative ethnography. Qualitative Inquiry, 1(1), 69-86. doi: http://dx.doi.org/10.1177/107780049500100105

Bellg, A. J., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D. S., Ory, M., . . . Czajkowski, S. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology, 23(5), 443-451. doi: http://dx.doi.org/10.1037/0278-6133.23.5.443

Benton, T., & Craig, C. (2001). Philosophy of social science: The philosophical foundations of social thought. London: Palgrave.

Berg, B. L. (2008). Qualitative research methods for the social sciences (7th ed.). Boston: Allyn & Bacon.

Bernard, H. R. (2006). Research methods in anthropology (4th ed.). Lanham, MD: AltaMira.

Biesta, G. (2010). Pragmatism and the philosophical foundations of mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Mixed methods in social and behavioral research (pp. 95-117). Thousand Oaks: Sage. doi: http://dx.doi.org/10.4135/9781506335193.n4

Black, K. J., Wenger, M. B., & O’Fallon, M. (2015). Developing a fidelity assessment instrument for nurse home visitors. Research in Nursing & Health, 38, 232-240. doi: http://dx.doi.org/10.1002/nur.21652

Bogdan, R. C., & Biklen, S. K. (2007). Qualitative research for education: An introduction to theory and methods (5th ed.). Boston: Pearson.

Bolton, D. (2008). The epistemology of randomized, controlled trials and application in psychiatry. Philosophy, Psychiatry, & Psychology, 15(2), 159-165. doi: http://dx.doi.org/10.1353/ppp.0.0171

Borg, W. R., & Gall, M. D. (1989). Education research: An introduction (5th ed.). White Plains, NY: Longman.

Borer, M. I., & Fontana, A. (2012). Postmodern trends: Expanding the horizons of interviewing practices and epistemologies. In J. F. Gubrium, J. A. Holstein, A. B. Marvasti, & K. D. McKinney (Eds.), The SAGE handbook of interview research: The complexity of the craft (2nd ed., pp. 45-60). Thousand Oaks: Sage. doi: http://dx.doi.org/10.4135/9781452218403.n4

Borrelli, B. (2011). The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. Journal of Public Health Dentistry, 71(s1), S52-S63. doi: http://dx.doi.org/10.1111/j.1752-7325.2011.00233.x

Borrelli, B., Sepinwall, D., Ernst, D., Breger, R., DeFrancesco, C., Czajkowski, S., . . . Ogedegbe, G. (2005). A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. Journal of Consulting and Clinical Psychology, 73(5), 852-860. doi: http://dx.doi.org/10.1037/0022-006x.73.5.852

Bosak, K., Pozehl, B., & Yates, B. (2012). Challenges of applying a comprehensive model of intervention fidelity. Western Journal of Nursing Research, 34(4), 504-519. doi: http://dx.doi.org/10.1177/0193945911403774

Boulton, M., & Fitzpatrick, R. (1996). Qualitative research in health care II: A structured review and evaluation of studies. Journal of Evaluation in Clinical Practice, 2(3), 171-179. doi: http://dx.doi.org/10.1111/j.1365-2753.1996.tb00041.x

Brannen, J., & Halcomb, E. J. (2009). Data collection in mixed methods research. In S. Andrew & E. J. Halcomb (Eds.), Mixed methods research for nursing and the health sciences (pp. 67-83). West Sussex, UK: Blackwell Publishing. doi: http://dx.doi.org/10.1002/9781444316490.ch5

Brink, P. J. (1991). Issues of reliability and validity. In J. M. Morse (Ed.), Qualitative nursing research: A contemporary dialogue (pp. 164-186). Newbury Park, CA: Sage. doi: http://dx.doi.org/10.4135/9781483349015.n20

Britten, N., Jones, R., Murphy, E., & Stacy, R. (1995). Qualitative research methods in general practice and primary care. Family Practice, 12(1), 104-114. doi: http://dx.doi.org/10.1093/fampra/12.1.104

Bruckenthal, P., & Broderick, J. E. (2007). Assessing treatment fidelity in pilot studies assist in designing clinical trials: An illustration from a nurse practitioner community-based intervention for pain. Advances in Nursing Science, 30(1), E72-E84. doi: http://dx.doi.org/10.1097/00012272-200701000-00015

Brydon-Miller, M., Kral, M., Maguire, P., Noffke, S., & Sabhlok, A. (2011). The roots and riffs on participatory action research. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (4th ed., pp. 387-400). Thousand Oaks, CA: Sage.

Bryman, A. (2006). Paradigm peace and the implications for quality. International Journal of Social Research Methodology, 9(2), 111-126. doi: http://dx.doi.org/10.1080/13645570600595280

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11(4), 261-276. doi: http://dx.doi.org/10.1080/13645570701401644

Burns, N., & Grove, S. K. (2005). The practice of nursing research: Conduct, critique, and utilization (5th ed.). St. Louis, MO: Elsevier.

Butterfield, L. D., Borgen, W. A., Amundson, N. E., & Maglio, A. T. (2005). Fifty years of the critical incident technique: 1954-2004 and beyond. Qualitative Research, 5, 475-497. doi: http://dx.doi.org/10.1177/1468794105056924

Butterfield, L. D., Borgen, W. A., Amundson, N. E., & Erlebach, A. C. (2010). What helps and hinders workers in managing change? Journal of Employment Counseling, 47, 146-156. doi: http://dx.doi.org/10.1002/j.2161-1920.2010.tb00099.x

Cain, S. (2013). Quiet: The power of introverts in a world that can’t stop talking. New York: Crown Publishing Group.

Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54(4), 297-312. doi: http://dx.doi.org/10.1037/h0040950

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171-246). Chicago: Rand McNally & Company.

Caracelli, V. J. (1994). Mixed-method evaluation: Developing quality criteria through concept mapping. Evaluation Practice, 15(2), 139-152. doi: http://dx.doi.org/10.1016/0886-1633(94)90005-1

Carels, R. A., Darby, L. A., Cacciapaglia, H. M., & Douglass, O. M. (2004). Reducing cardiovascular risk factors in postmenopausal women through a lifestyle change intervention. Journal of Women’s Health, 13(4), 412-426. doi: http://dx.doi.org/10.1089/154099904323087105

Carpenter, J. S., Burns, D. S., Wu, J., Yu, M., Ryker, K., Tallman, E., & Von Ah, D. (2013). Strategies used and data obtained during treatment fidelity monitoring. Nursing Research, 62(1), 59-65. doi: http://dx.doi.org/10.1097/nnr.0b013e31827614fd

Carter, S. M., & Little, M. (2007). Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative Health Research, 17(10), 1316-1328. doi: http://dx.doi.org/10.1177/1049732307306927

Cho, J., & Trent, A. (2006). Validity in qualitative research revisited. Qualitative Research, 6(3), 319-340. doi: http://dx.doi.org/10.1177/1468794106065006

Collins, K. M. T., Onwuegbuzie, A. J., & Johnson, R. B. (2012). Securing a place at the table: A review and extension of legitimation criteria for the conduct of mixed research. American Behavioral Scientist, 56(6), 849-865. doi: http://dx.doi.org/10.1177/0002764211433799

Cook, T. D., & Campbell, D. T. (2004). Validity. In C. Seale (Ed.), Social research methods: A reader (pp. 48-53). London: Routledge.

Creswell, J. W. (2013). Qualitative inquiry & research design (3rd ed.). Thousand Oaks: Sage.

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39(3), 124-130. doi: http://dx.doi.org/10.1207/s15430421tip3903_2

Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research (2nd ed.). Thousand Oaks: Sage.

Creswell, J. W., Fetters, M. D., Plano Clark, V. L., & Morales, A. (2009). Mixed methods intervention trials. In S. Andrew & E. J. Halcomb (Eds.), Mixed methods research for nursing and the health sciences (pp. 161-180). Chichester, UK: Blackwell Publishing. doi: http://dx.doi.org/10.1002/9781444316490.ch9

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281-302. doi: http://dx.doi.org/10.1037/h0040957

Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process. London: Sage.

Daly, J., McDonald, I., & Willis, E. (1992). Why don’t you ask them? A qualitative research framework for investigating the diagnosis of cardiac normality. In J. Daly, I. McDonald, & E. Willis (Eds.), Researching health care: Designs, dilemmas, disciplines (pp. 189-206). London: Routledge.

Davies, D., & Dodd, J. (2002). Qualitative research and the question of rigor. Qualitative Health Research, 12(2), 279-289. doi: http://dx.doi.org/10.1177/104973230201200211

Delaney, T. (2014). Classical and contemporary social theory: Investigation and application. Boston: Pearson.

Dellinger, A. B., & Leech, N. L. (2007). Toward a unified validation framework in mixed methods research. Journal of Mixed Methods Research, 1(4), 309-332. doi: http://dx.doi.org/10.1177/1558689807306147

Denzin, N. K. (2008). The new paradigm dialogs and qualitative inquiry. International Journal of Qualitative Studies in Education, 21(4), 315-325. doi: http://dx.doi.org/10.1080/09518390802136995

Denzin, N. K., & Lincoln, Y. S. (2011a). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (pp. 1-19). Thousand Oaks: Sage.

Denzin, N. K., & Lincoln, Y. S. (2011b). Preface. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (pp. ix-xvi). Thousand Oaks: Sage.

Devisch, I., & Murray, S. J. (2009). ‘We hold these truths to be self-evident’: Deconstructing ‘evidence-based’ medical practice. Journal of Evaluation in Clinical Practice, 15, 950-954. doi: http://dx.doi.org/10.1111/j.1365-2753.2009.01232.x

DeVon, H. A., Block, M. E., Moyle-Wright, P., Ernst, D. M., Hayden, S. J., Lazzara, D. J., . . . Kostas-Polston, E. (2007). A psychometric toolbox for testing validity and reliability. Journal of Nursing Scholarship, 39(2), 155-162. doi: http://dx.doi.org/10.1111/j.1547-5069.2007.00161.x

DiCicco-Bloom, B., & Crabtree, B. F. (2006). The qualitative research interview. Medical Education, 40, 314-321. doi: http://dx.doi.org/10.1111/j.1365-2929.2006.02418.x

Dikko, M. (2016). Establishing construct validity and reliability: Pilot testing of a qualitative interview for research in takaful (Islamic insurance). The Qualitative Report, 21(3), 521-528.

Dropbox Pro [Computer software]. San Francisco, CA: Dropbox, Inc.

Dyas, J. V., Togher, F., & Siriwardena, A. N. (2014). Intervention fidelity in primary care complex intervention trials: Qualitative study using telephone interviews of patients and practitioners. Quality in Primary Care, 22, 25-34.

Eakin, E. G., Bull, S. S., Riley, K. M., Reeves, M. M., McLaughlin, P., & Gutierrez, S. (2007). Resources for health: A primary-care-based diet and physical activity intervention targeting urban Latinos with multiple chronic conditions. Health Psychology, 26(4), 392-400. doi: http://dx.doi.org/10.1037/0278-6133.26.4.392

Eaton, L. H., Doorenbos, A. Z., Schmitz, K. L., Carpenter, K. M., & McGregor, B. A. (2011). Establishing treatment fidelity in a web-based behavioral intervention study. Nursing Research, 60(6), 430-435. doi: http://dx.doi.org/10.1097/nnr.0b013e31823386aa

Emden, C., & Sandelowski, M. (1998). The good, the bad and the relative, part one: Conceptions of goodness in qualitative research. International Journal of Nursing Practice, 4, 206-212. doi: http://dx.doi.org/10.1046/j.1440-172x.1998.00105.x

Emden, C., & Sandelowski, M. (1999). The good, the bad and the relative, part two: Conceptions of goodness in qualitative research. International Journal of Nursing Practice, 5, 2-7. doi: http://dx.doi.org/10.1046/j.1440-172x.1999.00139.x

Eisenhart, M. A., & Howe, K. R. (1992). In M. D. LeCompte, W. L. Millroy, & J. Preissle (Eds.), The handbook of qualitative research in education (pp. 644-680). San Diego, CA: Academic Press.

Everett, B., Davidson, P. M., Sheerin, N., Salamonson, Y., & DiGiacomo, M. (2008). Pragmatic insights into a nurse-delivered motivational interviewing intervention in the outpatient cardiac rehabilitation setting. Journal of Cardiopulmonary Rehabilitation and Prevention, 28, 61-64. doi: http://dx.doi.org/10.1097/01.hcr.0000311511.23849.35

Ezzy, D. (2002). Qualitative analysis: Practice and innovation. London: Routledge. doi: http://dx.doi.org/10.4324/9781315015484

Fitzpatrick, R., & Boulton, M. (1996). Qualitative research in health care: I. The scope and validity of methods. Journal of Evaluation in Clinical Practice, 2(2), 123-130. doi: http://dx.doi.org/10.1111/j.1365-2753.1996.tb00036.x

Forchuk, C., & Roberts, J. (1993). How to critique qualitative research articles. Canadian Journal of Nursing Research, 25(4), 47-55.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian & New Zealand Journal of Psychiatry, 36, 717-732. doi: http://dx.doi.org/10.1046/j.1440-1614.2002.01100.x

Gage, N. L. (1989). The paradigm wars and their aftermath: A “historical” sketch of research on teaching since 1989. Educational Researcher, 18(7), 4-10. doi: http://dx.doi.org/10.3102/0013189x018007004

Gardner, S. K. (2012). Paradigmatic differences, power and status: A qualitative investigation of faculty in one interdisciplinary research collaboration on sustainability science. Sustainability Science, 8, 241-252. doi: http://dx.doi.org/10.1007/s11625-012-0182-4

Garratt, D., & Hodkinson, P. (1998). Can there be criteria for selecting research criteria?—A hermeneutical analysis of an inescapable dilemma. Qualitative Inquiry, 4(4), 515-539. doi: http://dx.doi.org/10.1177/107780049800400406

Gearing, R. E., El-Bassel, N., Ghesquiere, A., Baldwin, S., Gillies, J., & Ngeow, E. (2011). Major ingredients of fidelity: A review and scientific guide to improving quality of intervention research implementation. Clinical Psychology Review, 31, 79-88. doi: http://dx.doi.org/10.1016/j.cpr.2010.09.007

Giddings, L. S., & Grant, B. M. (2009). From rigour to trustworthiness: Validating mixed methods. In S. Andrew & E. J. Halcomb (Eds.), Mixed methods research for nursing and the health sciences (pp. 119-134). Chichester, UK: Blackwell Publishing. doi: http://dx.doi.org/10.1002/9781444316490.ch7

Glaser, B. G. (1978). Theoretical sensitivity: Advances in the methodology of grounded theory. Mill Valley, CA: Sociology Press.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.

Goldenberg, M. J. (2006). On evidence and evidence-based medicine: Lessons from the

philosophy of science. Social Science & Medicine, 62, 2621-2632. doi:

http://dx.doi.org/10.1016/j.socscimed.2005.11.031

Graue, E. & Karabon, A. (2013). Standing at the corner of epistemology ave, theoretical

trail, methodology blvd, and methods street: The intersections of qualitative

research. In A. A. Trainor, & E. Graue (Eds.), Reviewing qualitative research in

the social sciences (pp. 11-20). New York: Routledge.

Green, B. B., McAfee, T., Hindmarsh, M., Madsen, L., Caplow, M., & Buist, D. (2002).

Effectiveness of telephone support in increasing physical activity levels in

primary care patients. American Journal of Preventive Medicine, 22(3),177-183.

doi: http://dx.doi.org/10.1016/s0749-3797(01)00428-7

Greene, J. C. (2007). Mixed methods in social inquiry. San Francisco, CA: Jossey Bass.

Greene, J. C., & Hall, J. N. (2010). Dialetics and pragmatism: Being of consequence. In

A. Tashakkori & C. Teddlie (Eds.), Mixed methods in social and behavioral

research (pp. 119-143). Thousand Oaks, CA: Sage. doi:

http://dx.doi.org/10.4135/9781506335193.n5

Grinyer, A., & Thomas, C. (2012). The value of interviewing on multiple occasions or

longitudinally. In J. F. Gubrium, J. A. Holstein, A. B. Marvasti, & K. D.

McKinney (Eds.), The SAGE handbook of interview research: The complexity of

the craft (2nd ed.) (pp. 219-230). Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781452218403.n15

Grossman, J. (2008). A couple of the nasties lurking in evidence-based medicine. Social

Epistemology, 22(4), 333-352. doi:

http://dx.doi.org/10.1080/02691720802566961

Guba, E. (1990). The alternative paradigm dialog. In E. G. Guba (Ed.). The paradigm

dialog (pp. 17-27). Newbury Park, CA: Sage.

Guba E., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA:

Sage.

Guba E., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K.

Denzin, & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117).

Thousand Oaks, CA: Sage.

Guba E., & Lincoln, Y. S. (2005). Paradigms and perspectives in contention. In N. K.

Denzin, & Y. S. Lincoln (Eds.), Handbook of qualitative research (3rd ed.) (pp.

183-190). Thousand Oaks, CA: Sage.

Gubrium, J. F., & Holstein, J. A. (2002). From the individual interview to the interview

society. In J. F. Gubrium & J. A. Holstein (Eds.), Handbook of interview

research: Context and method (pp. 3-32). Thousand Oaks, CA: Sage. doi:

http://dx.doi.org/10.4135/9781412973588.d3

Gubrium, J. F., Holstein, J. A., Marvasti, A. B., & McKinney, K. D. (2012). Introduction:

The complexity of the craft. In J. F. Gubrium, J. A. Holstein, A. B. Marvasti, &

K. D. McKinney (Eds.), The SAGE handbook of interview research: The

complexity of the craft (2nd ed.) (pp. 1-5). Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781452218403.n1

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An

experiment with data saturation and variability. Field Methods, 18(1), 59-82. doi:

http://dx.doi.org/10.1177/1525822x05279903

Hall, B., & Howard, K. (2008). A synergistic approach: Conducting mixed methods

research with typological and systemic design considerations. Journal of Mixed

Methods Research, 2(3), 248-269. doi:

http://dx.doi.org/10.1177/1558689808314622

Hall, J. M., & Stevens, P. E. (1991). Rigor in feminist research. Advances in Nursing

Science, 13(3), 16-29. doi: http://dx.doi.org/10.1097/00012272-199103000-

00005

Hammersley, M. (1987). Some notes on the terms ‘validity’ and ‘reliability.’ British

Educational Research Journal, 13(1), 73-81. doi:

http://dx.doi.org/10.1080/0141192870130107

Hammersley, M. (1992). What’s wrong with ethnography? London: Routledge. doi:

http://dx.doi.org/10.4324/9781315002675

Hammersley, M. (1998). Reading ethnographic research: A critical guide (2nd ed.).

Essex, UK: Addison Wesley Longman Limited.

Hammersley, M. (2003). Recent radical criticism of interview studies: Any implications

for the sociology of education? British Journal of Sociology of Education, 24(1), 119-

126. doi: http://dx.doi.org/10.1080/01425690301906

Hammersley, M. (2008a). Assessing validity in social research. In P. Alasuutari, L.

Bickman, & J. Brannen (Eds.), The SAGE handbook of social research methods

(pp. 42-53). London: Sage. doi:

http://dx.doi.org/10.4135/9781446212165.n4

Hammersley, M. (2008b). Questioning qualitative inquiry: Critical essays. London: Sage.

Haslum, M. N. (2007). What kind of evidence do we need for evaluating therapeutic

interventions? Dyslexia, 13, 234-239. doi: http://dx.doi.org/10.1002/dys.354

Healy, S. (2003). Epistemological pluralism and the ‘politics of choice.’ Futures, 35,

689-701. doi: http://dx.doi.org/10.1016/s0016-3287(03)00022-3

Hesse-Biber, S. (2010). Qualitative approaches to mixed methods practice. Qualitative

Inquiry, 16(6), 455-468. doi: http://dx.doi.org/10.1177/1077800410364611

Hesse-Biber, S. (2014). Feminist research practice: A primer (2nd ed.). Thousand Oaks:

Sage.

Hesse-Biber, S., & Johnson, R. B. (2013). Coming at things differently: Future directions

of possible engagement with mixed methods research. Journal of Mixed Methods

Research, 7(2), 103-109. doi: http://dx.doi.org/10.1177/1558689813483987

Holstein, J., & Gubrium, J. F. (1995). The active interview. London: Sage. doi:

http://dx.doi.org/10.4135/9781412986120

Hope, K. W., & Waterman, H. A. (2003). Praiseworthy pragmatism? Validity and action

research. Journal of Advanced Nursing, 44(2), 120-127. doi:

http://dx.doi.org/10.1046/j.1365-2648.2003.02777.x

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12,

307-312. doi: http://dx.doi.org/10.1046/j.1365-2702.2003.00683.x

Howe, K. R. (1988). Against the quantitative-qualitative incompatibility thesis or dogmas

die hard. Educational Researcher, 17(8), 10-16. doi:

http://dx.doi.org/10.3102/0013189x017008010

Isaac, C. A., & Franceschi, A. (2008). EBM: Evidence to practice and practice to

evidence. Journal of Evaluation in Clinical Practice, 14, 656-659. doi:

http://dx.doi.org/10.1111/j.1365-2753.2008.01043.x

Ivankova, N., & Kawamura, Y. (2010). Emerging trends in the utilization of integrated

designs in the social, behavioral, and health sciences. In A. Tashakkori, & C.

Teddlie (Eds.), SAGE handbook of mixed methods research (2nd ed.) (pp. 581-

611). Thousand Oaks, CA: Sage. doi: http://dx.doi.org/10.4135/9781506335193.n21

Jacobs, A. D., Ammerman, A. S., & Ennett, S. T. (2004). Effects of a tailored follow-up

intervention on health behaviors, beliefs, and attitudes. Journal of Women’s

Health, 13(5), 557-568. doi: http://dx.doi.org/10.1089/1540999041281016

Janesick, V. J. (2000). The choreography of qualitative research design: Minuets,

improvisations, and crystallization. In N. K. Denzin, & Y. S. Lincoln (Eds.), The SAGE

handbook of qualitative research (2nd ed.) (pp. 379-399). Thousand Oaks: Sage.

Jeffery, R. W., Wing, R. R., Thorson, C., & Burton, L. R. (1998). Use of personal trainers

and financial incentives to increase exercise in a behavioral weight-loss program.

Journal of Consulting & Clinical Psychology, 66(5), 777-783. doi:

http://dx.doi.org/10.1037/0022-006x.66.5.777

Johnson, R. B., & Christensen, L. (2004). Educational research: Quantitative,

qualitative, and mixed approaches (2nd ed.). Boston, MA: Pearson Education.

Johnson, R. B., McGowan, M. W., & Turner, L. A. (2010). Grounded theory in practice:

Is it inherently a mixed method? Research in the Schools, 17(2), 65-78.

Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research

paradigm whose time has come. Educational Researcher, 33(7), 14-26. doi:

http://dx.doi.org/10.3102/0013189x033007014

Johnson, J. M., & Rowlands, T. (2012). The interpersonal dynamics of in-depth

interviewing. In J. F. Gubrium, J. A. Holstein, A. B. Marvasti, & K. D. McKinney

(Eds.), The SAGE handbook of interview research: The complexity of the craft

(2nd ed.) (pp. 99-113). Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781452218403.n7

Johnson, B., & Turner, D. W. (2003). Data collection strategies in mixed methods

research. In A. Tashakkori, & C. Teddlie (Eds.), Handbook of mixed methods in social &

behavioral research (pp. 297-319). Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781506335193

Kapborg, I., & Berterö, C. (2002). Using an interpreter in qualitative interviews: Does it

threaten validity? Nursing Inquiry, 9(1), 52-56. doi:

http://dx.doi.org/10.1046/j.1440-1800.2002.00127.x

Kelly, S. E. (2013). Qualitative interviewing techniques and styles. In I. Bourgeault, R.

Dingwall, & R. de Vries (Eds.), The SAGE handbook of qualitative methods in

health research (pp. 307-326). London: Sage. doi:

http://dx.doi.org/10.4135/9781446268247.n17

King, N., & Horrocks, C. (2010). Interviews in qualitative research. London: Sage.

Kissi, J., Dainty, A., & Liu, A. (2012). Examining middle managers’ influence on

innovation in construction professional services firms: A tale of three innovations.

Construction Innovation, 12(1), 11-28. doi:

http://dx.doi.org/10.1108/14714171211197472

Koch, T. (1996). Implementation of a hermeneutic inquiry in nursing: Philosophy, rigour

and representation. Journal of Advanced Nursing, 24, 174-184. doi:

http://dx.doi.org/10.1046/j.1365-2648.1996.17224.x

Koch, T., & Harrington, A. (1998). Reconceptualizing rigour: The case for reflexivity.

Journal of Advanced Nursing, 28(4), 882-890. doi:

http://dx.doi.org/10.1046/j.1365-2648.1998.00725.x

Koelsch, L. E. (2013). Reconceptualizing the member check interview. International

Journal of Qualitative Methods, 12, 168-179.

Koro-Ljungberg, M. (2008). Validity and validation in the making in the context of

qualitative research. Qualitative Health Research, 18(7), 983-989. doi:

http://dx.doi.org/10.1177/1049732308318039

Kuckelman Cobb, A., & Nelson Hagemaster, J. (1987). Ten criteria for evaluating

qualitative research proposals. Journal of Nursing Education, 26(4), 138-143.

Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of

Chicago Press.

Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago: University

of Chicago Press.

Kumanyika, S. K., Shults, J., & Fassbender, J. (2005). Outpatient weight management in

African-Americans: The Healthy Eating and Lifestyle Program (HELP) study.

Preventive Medicine, 41(2), 488-502. doi:

http://dx.doi.org/10.1016/j.ypmed.2004.09.049

Kvale, S. (1995). The social construction of validity. Qualitative Inquiry, 1(1), 19-40.

doi: http://dx.doi.org/10.1177/107780049500100103

Kvale, S. (1996). Interviews: An introduction to qualitative research interviewing.

Thousand Oaks: Sage.

Kvale, S., & Brinkmann, S. (2009). Interviews: Learning the craft of qualitative research

interviewing (2nd ed.). Thousand Oaks: Sage.

Langberg, J. M., Vaughn, A. J., Williamson, P., Epstein, J. N., Girio-Herrera, E., &

Becker, S. P. (2011). Refinement of an organizational skills intervention for

adolescents with ADHD for implementation by school mental health providers.

School Mental Health, 3(3), 143-155. doi: http://dx.doi.org/10.1007/s12310-

011-9055-8

Lather, P. (1986). Issues of validity in openly ideological research: Between a rock and a

soft place. Interchange, 17(4), 63-84. doi:

http://dx.doi.org/10.1007/bf01807017

Lather, P. (1993). Fertile obsession: Validity after poststructuralism. Sociological

Quarterly 34(4), 673-693. doi: http://dx.doi.org/10.1111/j.1533-

8525.1993.tb00112.x

Lather, P. (2001). Postmodernism, post-structuralism and post(critical) ethnography: of

ruins, aporias, and angels. In P. Atkinson, A. Coffey, S. Delamont, J. Lofland &

L. Lofland (Eds.), Handbook of ethnography (pp. 477-492). London: Sage. doi:

http://dx.doi.org/10.4135/9781848608337.n33

Lather, P. (2006). Paradigm proliferation as a good thing to think with: Teaching research

in education as a wild profusion. International Journal of Qualitative Studies in

Education, 19(1), 35-57. doi:

http://dx.doi.org/10.1080/09518390500450144

LeCompte, M., & Goetz, J. P. (1984). Problems of reliability and validity in ethnographic

research. Review of Educational Research, 52, 31-60. doi:

http://dx.doi.org/10.3102/00346543052001031

Leininger, M. (1994). Evaluation criteria and critique of qualitative research studies. In J.

M. Morse (Ed.), Critical issues in qualitative research methods (pp. 95-115).

Thousand Oaks: Sage.

Lenzo, K. (1995). Validity and self-reflexivity meet poststructuralism: Scientific ethos

and the transgressive self. Educational Researcher, 24(4), 17-23, 45. doi:

http://dx.doi.org/10.3102/0013189x024004017

Lewis, J. (2009). Redefining qualitative methods: Believability in the fifth moment.

International Journal of Qualitative Methods, 8(2), 1-14.

Lincoln, Y. S. (1990). Campbell’s retrospective and a constructivist’s perspective.

Harvard Educational Review, 60(4), 501-504.

Lincoln, Y. S. (1995). Emerging criteria for quality in qualitative and interpretive

research. Qualitative Inquiry, 1(3), 275-289. doi:

http://dx.doi.org/10.1177/107780049500100301

Lincoln, Y. S. (2009). “What a long strange trip it’s been…”: Twenty-five years of

qualitative and new paradigm research. Qualitative Inquiry, 16(1), 3-9. doi:

http://dx.doi.org/10.1177/1077800409349754

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Lincoln, Y. S., Lynham, S. A., & Guba, E. G. (2011). Paradigmatic controversies,

contradictions, and emerging confluences, revisited. In N. K. Denzin, & Y. S.

Lincoln (Eds.), The SAGE handbook of qualitative research (4th ed.). Thousand Oaks:

Sage.

LoBiondo-Wood, G., & Haber, J. (2013). Reliability and validity. In G. LoBiondo-Wood, &

J. Haber (Eds.), Nursing research: Methods and critical appraisal for evidence-based

practice (pp. 289-309). St. Louis, MO: Mosby.

Long, T., & Johnson, M. (2000). Rigour, reliability and validity in qualitative research.

Clinical Effectiveness in Nursing, 4, 30-37. doi:

http://dx.doi.org/10.1054/cein.2000.0106

Marcus, B. H., Bock, B. C., Pinto, B. M., Forsyth, L. H., Roberts M. B., & Traficante, R.

M. (1998). Efficacy of an individualized, motivationally-tailored physical activity

intervention. Annals of Behavioral Medicine, 20(3), 174-180. doi:

http://dx.doi.org/10.1007/bf02884958

Marcus, B. H., Napolitano, M. A., & King, A. C. (2007). Telephone versus print delivery

of an individualized motivationally tailored physical activity intervention: Project

STRIDE. Health Psychology, 26(4), 401-409. doi:

http://dx.doi.org/10.1037/0278-6133.26.4.401

Mason, J. (1996). Qualitative researching. London: Sage.

Maxwell, J. A. (1990). Up from positivism. Harvard Educational Review, 60(4), 497-

501.

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard

Educational Review, 62(3), 279-300. doi:

http://dx.doi.org/10.17763/haer.62.3.8323320856251826

Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd ed.).

Thousand Oaks: Sage.

Maxwell, J. A. (2015). Expanding the history and range of mixed methods research.

Journal of Mixed Methods Research, 10(1), 12-27. doi:

http://dx.doi.org/10.1177/1558689815571132

Mays, N., & Pope, C. (1995). Rigour and qualitative research. British Medical Journal,

311, 109-112. doi: http://dx.doi.org/10.1136/bmj.311.6997.109

Melia, K. M. (2013). Recognizing quality in qualitative research. In I. Bourgeault, R.

Dingwall, & R. de Vries (Eds.), The SAGE handbook of qualitative methods in

health research (pp. 559-574). London: Sage. doi:

http://dx.doi.org/10.4135/9781446268247.n29

Mertens, D. M. (2009). Transformative research and evaluation. New York: Guilford.

Mertens, D. M. (2013). Mixed methods. In A. A. Trainor, & E. Graue (Eds.), Reviewing

qualitative research in the social sciences (pp. 139-150). New York: Routledge.

doi: http://dx.doi.org/10.4324/9780203813324

Mertens, D. M., Bledsoe, K. L., Sullivan, M., & Wilson, A. (2010). Utilization of mixed

methods for transformative purposes. In A. Tashakkori & C. Teddlie (Eds.),

Mixed methods in social and behavioral research (pp. 193-214). Thousand Oaks,

CA: Sage. doi: http://dx.doi.org/10.4135/9781506335193.n8

Merton, R. K., & Kendall, P. L. (1946). The focused interview. American Journal of

Sociology, 51(6), 541-557. doi: http://dx.doi.org/10.1086/219886

Mesel, T. (2013). The necessary distinction between methodology and philosophical

assumptions in healthcare research. Scandinavian Journal of Caring Services,

27(3), 750-756. doi:

http://dx.doi.org/10.1111/j.1471-6712.2012.01070.x

Messick, S. (1989). Meaning and values in test validation: The science and ethics of

assessment. Educational Researcher, 18(2), 5-11. doi:

http://dx.doi.org/10.3102/0013189x018002005

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from

persons’ responses and performances as scientific inquiry into score meaning.

American Psychologist, 50(9), 741-749. doi: http://dx.doi.org/10.1037/0003-

066x.50.9.741

Miczo, N. (2003). Beyond the “fetishism of words”: Considerations on the use of the

interview to gather chronic illness narratives. Qualitative Health Research, 13,

469-490. doi: http://dx.doi.org/10.1177/1049732302250756

Miller, C., & Erickson, P. (2006). The politics of bridging scales and epistemologies:

Science and democracy in global environmental governance. In W. Reid, T.

Wilbanks, D. Capistrano, & F. Berkes (Eds.). Bridging scales and knowledge

systems: Concepts and applications in ecosystem assessment. Washington: Island

Press.

Miller, J., & Glassner, B. (1997). The ‘inside’ and the ‘outside’: Finding realities in

interviews. In D. Silverman (Ed.), Qualitative research: Theory, method and

practice (pp. 98-111). London: Sage.

Miller, T. R., Baird, T. D., Littlefield, C. M., Kofinas, G., Chapin, F. S., & Redman, C. L.

(2008). Epistemological pluralism: Reorganizing interdisciplinary research.

Ecology and Society, 13(2), 46.

Mills, C. W. (2000). The sociological imagination. Oxford: Oxford University Press.

Mishler, E. G. (1990). Validation in inquiry-guided research: The role of exemplars in

narrative studies. Harvard Educational Review, 60(4), 415-442. doi:

http://dx.doi.org/10.17763/haer.60.4.n4405243p6635752

Moncher, F. J., & Prinz, F. J. (1991). Treatment fidelity in outcome studies. Clinical

Psychology Review, 11, 247-266. doi: http://dx.doi.org/10.1016/0272-

7358(91)90103-2

Morgan, D. L. (2007). Paradigms lost and pragmatism regained: Methodological

implications of combining qualitative and quantitative methods. Journal of Mixed

Methods Research, 1(1), 48-76. doi:

http://dx.doi.org/10.1177/2345678906292462

Morgan, D. L. (2014). Integrating qualitative & quantitative methods: A pragmatic

approach. Thousand Oaks, CA: Sage.

Morse, J. M. (1994). Qualitative research: Fact or fantasy? In J. M. Morse (Ed.), Critical

issues in qualitative research methods (pp. 1-7). Thousand Oaks: Sage.

Morse, J. M. (1999). Myth #93: Reliability and validity are not relevant to qualitative

inquiry. Qualitative Health Research, 9(6), 717-718. doi:

http://dx.doi.org/10.1177/104973299129122171

Morse, J. M. (2012). Implications of interview type and structure in mixed-method

designs. In J. F. Gubrium, J. A. Holstein, A. B. Marvasti, & K. D. McKinney

(Eds.), The SAGE handbook of interview research: The complexity of the craft

(2nd ed.) (pp. 193-204). Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781452218403.n13

Morse, J. M., Barratt, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification

strategies for establishing reliability and validity in qualitative research.

International Journal of Qualitative Methods, 1(2), 1-19.

Morse, J. M., & Niehaus, L. (2009). Mixed method design: Principles and procedures.

Walnut Creek, CA: Left Coast Press.

Morse, J. M., Niehaus, L., Varnhagen, S., Austin, W., & McIntosh, M. (2008). Qualitative

researchers’ conceptualizations of the risks inherent in qualitative interviews.

International Review of Qualitative Research, 1(2), 195-214.

Muncey, T. (2009). Does mixed methods constitute a change in paradigm? In S. Andrew,

& E. J. Halcomb (Eds.), Mixed methods research for nursing and the health

sciences (pp. 13-30). Oxford: Blackwell Publishing. doi:

http://dx.doi.org/10.1002/9781444316490.ch2

Munhall, P. L. (2012a). The landscape of qualitative research in nursing. In P. A.

Munhall (Ed.), Nursing research: A qualitative perspective (pp. 3-31). Sudbury,

MA: Jones & Bartlett Learning.

Munhall, P. L. (2012b). Epistemology in nursing. In P. A. Munhall (Ed.), Nursing

research: A qualitative perspective (pp. 69-94). Sudbury, MA: Jones & Bartlett

Learning.

Nascimento, F. A. C., Majumdar, A., & Jarvis, S. (2012). Nighttime approaches to

offshore installations in Brazil: Safety shortcomings experienced by helicopter

pilots. Accident Analysis & Prevention, 47, 64-74. doi:

http://dx.doi.org/10.1016/j.aap.2012.01.014

Nelson, J. A., Onwuegbuzie, A. J., Wines, L. A., & Frels, R. K. (2013). The therapeutic

interview process in qualitative research studies. The Qualitative Report, 18, 1-17.

Neuman, W. L. (2006). Social research methods: Qualitative and quantitative

approaches (6th ed.). Boston, MA: Allyn & Bacon.

Neuman, W. L. (2011). Social research methods: Qualitative and quantitative

approaches (7th ed.). Boston, MA: Allyn & Bacon.

Nolan, M., & Behi, R. (1995). Alternative approaches to establishing reliability and

validity. British Journal of Nursing, 4(10), 587-590. doi:

http://dx.doi.org/10.12968/bjon.1995.4.10.587

O’Cathain, A. (2010). Assessing the quality of mixed methods research. In A.

Tashakkori, & C. Teddlie (Eds.), SAGE handbook of mixed methods research (2nd

ed.) (pp. 531-555). Thousand Oaks, CA: Sage. doi: http://dx.doi.org/10.4135/9781506335193.n21

O’Cathain, A., Murphy, E., & Nicholl, J. (2007). Why, and how, mixed methods research

is undertaken in health services research in England: A mixed methods study.

BMC Health Services Research, 7(85), 1-11. doi:

http://dx.doi.org/10.1186/1472-6963-7-85

O’Cathain, A., Murphy, E., & Nicholl, J. (2008). The quality of mixed methods studies in

health services research. Journal of Health Services Research & Policy, 13(2), 92-

98. doi: http://dx.doi.org/10.1258/jhsrp.2007.007074

Onwuegbuzie, A. J. (2003). Expanding the framework of internal and external validity in

quantitative research. Research in the Schools, 10(1), 71-89.

Onwuegbuzie, A. J., Daniel, L. G., & Collins, K. M. T. (2009). A meta-validation model

for assessing the score-validity of student teaching evaluations. Quality &

Quantity, 43, 197-209. doi: http://dx.doi.org/10.1007/s11135-007-9112-4

Onwuegbuzie, A. J., & Johnson, R. B. (2006). The validity issue in mixed research.

Research in the Schools, 13(1), 48-63.

Onwuegbuzie, A. J., Johnson, R. B., & Collins, K. M. T. (2011). Assessing legitimation

in mixed research: A new framework. Quality & Quantity, 45, 1253-1271. doi:

http://dx.doi.org/10.1007/s11135-009-9289-9

Onwuegbuzie, A. J., & Leech, N. L. (2007). Validity and qualitative research: An

oxymoron? Quality & Quantity, 41, 233-249. doi:

http://dx.doi.org/10.1007/s11135-006-9000-3

Padgett, D. K. (2012). Qualitative and mixed methods in public health. Thousand Oaks:

Sage. doi: http://dx.doi.org/10.4135/9781483384511

Perry, C. K., Rosenfeld, A. G., Bennett, J. A., & Potempa, K. (2007). Heart-to-Heart:

promoting walking in rural women through motivational interviewing and group

support. Journal of Cardiovascular Nursing, 22(4), 304-312. doi:

http://dx.doi.org/10.1097/01.jcn.0000278953.67630.e3

Platt, J. (2012). The history of the interview. In J. F. Gubrium, J. A. Holstein, A. B.

Marvasti, & K. D. McKinney (Eds.), The SAGE handbook of interview research:

The complexity of the craft (2nd ed.) (pp. 9-26). Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781452218403.n2

Poland, B. D. (2003). Transcription quality. In J. A. Holstein & J. F. Gubrium (Eds.), Inside

interviewing: New lenses, new concerns. Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781412984492.n13

Polit, D. F., & Tatano Beck, C. (2014). Essentials of nursing research: Appraising

evidence for nursing practice (8th ed.). Philadelphia, PA: Wolters Kluwer Health.

Popay, J., Rogers, A., & Williams, G. (1998). Rationale and standards for the systematic

review of qualitative literature in health services research. Qualitative Health

Research, 8(3), 341-351. doi:

http://dx.doi.org/10.1177/104973239800800305

Porter, S. (2007). Validity, trustworthiness and rigour: Reasserting realism in qualitative

research. Journal of Advanced Nursing, 60(1), 79-86. doi:

http://dx.doi.org/10.1111/j.1365-2648.2007.04360.x

Pozehl, B. J., Duncan, K., Hertzog, M., McGuire, R., Norman, J. F., Artinian, N. T., &

Keteyian, S. J. (2014). Study of adherence to exercise in heart failure: The

HEART camp trial protocol. BMC Cardiovascular Disorders, 14, 1-11. doi:

http://dx.doi.org/10.1186/1471-2261-14-172

Reichardt, C. S., & Cook, T. D. (1979). Beyond qualitative versus quantitative methods.

In T. D. Cook & C. S. Reichardt (Eds.), Qualitative and quantitative methods in

evaluation research (pp. 7-32). Newbury Park, CA: Sage.

Resnick, B., Bellg, A. J., Borrelli, B., Breger, R., Hecht, J., Sharp, D. L., … Czajkowski, S.

(2005). Examples of implementation and evaluation of treatment fidelity in the

BCC studies: Where we are and where we need to go. Annals of Behavioral

Medicine, 29, 46-54. doi: http://dx.doi.org/10.1207/s15324796abm2902s_8

Resnick, B., Inguito, P., Orwig, D., Yu Yahiro, J., Hawkes, W., Werner,

M., … Magaziner, J. (2005). Treatment fidelity in behavior change research.

Nursing Research, 54(2), 139-143. doi: http://dx.doi.org/10.1097/00006199-

200503000-00010

Resnick, B., Galik, E., Pretzer-Aboff, I., Gruber-Baldini, A. L., Russ, K., Cayo, J., &

Zimmerman, S. (2009). Treatment fidelity in nursing home research: The Res-Care

intervention study. Research in Gerontological Nursing, 2(1), 30-38. doi:

http://dx.doi.org/10.3928/19404921-20090101-09

Richardson, L. (2000). New writing practices in qualitative research. Sociology of Sport

Journal, 17, 5-20.

Robb, S. L., Burns, D. S., Docherty, S. L., & Haase, J. E. (2011). Ensuring treatment

fidelity in a multi-site behavioral intervention study: Implementing NIH behavior

change consortium recommendations in the SMART trial. Psycho-Oncology, 20(11),

1-19. doi: http://dx.doi.org/10.1002/pon.1845

Roberts, P., & Priest, H. (2006). Reliability and validity in research. Nursing Standard,

20(44), 41-45. doi: http://dx.doi.org/10.7748/ns2006.07.20.44.41.c6560

Rolfe, G. (2006). Validity, trustworthiness and rigour: Quality and the idea of qualitative

research. Journal of Advanced Nursing, 53(3), 304-310. doi:

http://dx.doi.org/10.1111/j.1365-2648.2006.03727.x

Roulston, K. (2011). Interview ‘problems’ as topics for analysis. Applied Linguistics,

32(1), 77-94. doi: http://dx.doi.org/10.1093/applin/amq036

Roulston, K. (2012). The pedagogy of interviewing. In J. F. Gubrium, J. A. Holstein, A.

B. Marvasti, & K. D. McKinney (Eds.), The SAGE handbook of interview

research: The complexity of the craft (2nd ed.) (pp. 61-74). Thousand Oaks: Sage.

doi: http://dx.doi.org/10.4135/9781452218403.n5

Roulston, K., deMarrais, K., & Lewis, J. B. (2003). Learning to interview in the social

sciences. Qualitative Inquiry, 9(4), 643-668. doi:

http://dx.doi.org/10.1177/1077800403252736

Roulston, K., McClendon, V. J., Thomas, A., Tuff, R., Williams, G., & Healy, M. F.

(2008). Developing reflective interviewers and reflexive researchers. Reflective

Practice, 9(3), 231-243. doi: http://dx.doi.org/10.1080/14623940802206958

Rubin, H., & Rubin, I. (2005). Qualitative interviewing: The art of hearing data.

Thousand Oaks: Sage. doi: http://dx.doi.org/10.4135/9781452226651

Saldaña, J. (2003). Longitudinal qualitative research: Analyzing change through time.

Walnut Creek, CA: AltaMira.

Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Thousand

Oaks: Sage.

Sale, J. E., & Brazil, K. (2004). A strategy to identify critical appraisal criteria for

primary mixed-method studies. Quality & Quantity, 38, 351-365. doi:

http://dx.doi.org/10.1023/b:ququ.0000043126.25329.85

Sandelowski, M. (1986). The problem of rigor in qualitative research. Advances in

Nursing Science, 8(3), 27-37. doi: http://dx.doi.org/10.1097/00012272-

198604000-00005

Sandelowski, M. (1993). Rigor or rigor mortis: The problem of rigor in qualitative

research revisited. Advances in Nursing Science, 16(2), 1-8. doi:

http://dx.doi.org/10.1097/00012272-199312000-00002

Sandelowski, M. (1996). Using qualitative methods in intervention studies. Research in

Nursing & Health, 19, 359-364. doi: http://dx.doi.org/10.1002/(sici)1098-

240x(199608)19:4<359::aid-nur9>3.0.co;2-h

Sandelowski, M. (2002). Reembodying qualitative inquiry. Qualitative Health Research,

12(1), 104-115. doi: http://dx.doi.org/10.1177/1049732302012001008

Sandelowski, M., & Barroso, J. (2002). Reading qualitative studies. International Journal

of Qualitative Methods, 1(1), 74-103.

Schwandt, T. A. (2000). Three epistemological stances for qualitative inquiry:

Interpretivism, hermeneutics, and social constructionism. In N. K. Denzin, & Y.

S. Lincoln (Eds.), Handbook of qualitative research (2nd ed.) (pp. 189-213).

Thousand Oaks, CA: Sage.

Schwandt, T. A. (2001). Dictionary of qualitative inquiry (2nd ed.). Thousand Oaks: Sage.

Seale, C. (1999a). Quality in qualitative research. Qualitative Inquiry, 5(4), 465-478. doi:

http://dx.doi.org/10.1177/107780049900500402

Seale, C. (1999b). The quality of qualitative research. London: Sage.

Seidman, I. (2006). Interviewing as qualitative research: A guide for researchers in

education and the social sciences (3rd ed.). New York: Teachers College Press.

Secker, J., Wimbush, E., Watson, J., & Milburn, K. (1995). Qualitative methods in health

promotion research: Some criteria for quality. Health Education Journal, 54, 74-

87. doi: http://dx.doi.org/10.1177/001789699505400108

Selltiz, C., Wrightsman, L. S., & Cook, S. W. (1976). Research methods in social

relations (3rd ed.). New York: Holt, Rinehart, & Winston.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-

experimental designs for generalized causal inference. Boston: Houghton Mifflin.

Silverman, D. (2000). Doing qualitative research: A practical handbook. London: Sage.

Slevin, E., & Sines, D. (2000). Enhancing the truthfulness, consistency and transferability

of a qualitative study: Utilising a manifold of approaches. Nurse Researcher, 7(2),

79-89. doi: http://dx.doi.org/10.7748/nr2000.01.7.2.79.c6113

Smith, J. K. (1990). Alternate research paradigms and the problem of criteria. In E. G.

Guba (Ed.), The Paradigm Dialogue (pp. 167-187). Newbury Park: Sage.

Smith, J. K., & Deemer, D. K. (2000). The problem of criteria in the age of relativism. In

N. K. Denzin, & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed.)

(pp. 877-896). Thousand Oaks, CA: Sage.

Sommer Harrits, G. (2011). More than method? A discussion of paradigm differences

within mixed methods research. Journal of Mixed Methods Research, 5(2), 150-

166. doi: http://dx.doi.org/10.1177/1558689811402506

Song, M., Sandelowski, M., & Happ, M. B. (2010). Current practices and emerging

trends in conducting mixed methods intervention studies in the health sciences. In

A. Tashakkori & C. Teddlie (Eds.), Sage Handbook of mixed methods in social

and behavioral research (2nd ed.) (pp. 725-747). Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781506335193

Song, M., Sandelowski, M., & Happ, M. B. (2010). Development of a tool to assess

fidelity to a psycho-educational intervention. Journal of Advanced Nursing, 66(3),

673-682. doi: http://dx.doi.org/10.1111/j.1365-2648.2009.05216.x

Sparkes, A. C. (2001). Myth 94: Qualitative researchers will agree about validity.

Qualitative Health Research, 11(4), 538-552. doi:

http://dx.doi.org/10.1177/104973230101100409

Spillane, J., Stitziel Pareja, A., Dorner, L., Barnes, C., May, H., Huff, J., & Camburn, E.

(2010). Mixing methods in randomized controlled trials (RCTs): Validation,

contextualization, triangulation, and control. Educational Assessment, Evaluation

and Accountability, 22(1), 5-28. doi: http://dx.doi.org/10.1007/s11092-009-

9089-8

Spradley, J. (1979). The ethnographic interview. Fort Worth, TX: Harcourt Brace

Jovanovich College Publishers.

St. Pierre, E. A. (1997). Methodology in the fold and the irruption of transgressive data.

Qualitative Studies in Education, 10(2), 175-189. doi:

http://dx.doi.org/10.1080/095183997237278

Stenbacka, C. (2001). Qualitative research requires quality concepts of its own.

Management Decision, 39(7), 551-555. doi:

http://dx.doi.org/10.1108/eum0000000005801

Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory

procedures and techniques. Thousand Oaks: Sage.

Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and

procedures for developing grounded theory (2nd ed.). Thousand Oaks: Sage.

Streubert, H. J. (2013). Appraising qualitative research. In G. LoBiondo-Wood, & J.

Haber (Eds.), Nursing research: Methods and critical appraisal for evidence-

based practice (8th ed.) (pp. 132-158). St. Louis, MO: Mosby.

Talmage, J. B. (2012). Listening to, and for, the research interview. In J. F. Gubrium, J.

A. Holstein, A. B. Marvasti, & K. D. McKinney (Eds.), The SAGE handbook of

interview research: The complexity of the craft (2nd ed.) (pp. 295-304). Thousand

Oaks: Sage. doi: http://dx.doi.org/10.4135/9781452218403.n21

Tashakkori, A., & Teddlie, C. (2003). The past and future of mixed methods research:

From data triangulation to mixed model designs. In A. Tashakkori, & C. Teddlie

(Eds.), Sage Handbook of mixed methods in social and behavioral research (pp.

671-701). Thousand Oaks: Sage. doi:

http://dx.doi.org/10.4135/9781506335193

Tatano Beck, C. (1993). Qualitative research: The evaluation of its credibility, fittingness,

and auditability. Western Journal of Nursing Research, 15(2), 263-266. doi:

http://dx.doi.org/10.1177/019394599301500212

Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Thousand Oaks: Sage.

Teddlie, C., & Tashakkori, A. (2012). Common “core” characteristics of mixed methods research: A review of critical issues and call for greater convergence. American Behavioral Scientist, 56(6), 774-788. doi: http://dx.doi.org/10.1177/0002764211433795

Thomas, E., & Magilvy, J. K. (2011). Qualitative rigor or research validity in qualitative

research. Journal for Specialists in Pediatric Nursing, 16, 151-155. doi:

http://dx.doi.org/10.1111/j.1744-6155.2011.00283.x

Thorne, S. E. (1991). Methodological orthodoxy in qualitative nursing research: Analysis of the issues. Qualitative Health Research, 1(2), 178-199. doi: http://dx.doi.org/10.1177/104973239100100203

Thorne, S. (1997). The art (and science) of critiquing qualitative research. In J. M. Morse (Ed.), Completing a qualitative project: Details and dialogue (pp. 117-132). Thousand Oaks: Sage.

Tobin, G. A., & Begley, C. M. (2004). Methodological rigour within a qualitative framework. Journal of Advanced Nursing, 48(4), 388-396. doi: http://dx.doi.org/10.1111/j.1365-2648.2004.03207.x

Toobert, D. J., Strycker, L. A., Glasgow, R. E., Barrera, M., & Angell, K. (2005). Effects of the Mediterranean lifestyle program on multiple risk behaviors and psychosocial outcomes among women at risk for heart disease. Annals of Behavioral Medicine, 29(2), 128-137. doi: http://dx.doi.org/10.1207/s15324796abm2902_7

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative

research. Qualitative Inquiry, 16(10), 837-851. doi:

http://dx.doi.org/10.1177/1077800410383121

Trainor, A. A. (2013). Interview research. In A. A. Trainor, & E. Graue (Eds.), Reviewing

qualitative research in the social sciences (pp.125-138). New York: Routledge.

doi: http://dx.doi.org/10.4324/9780203813324

Trainor, A. A., & Graue, E. (2013). Introduction. In A. A. Trainor, & E. Graue (Eds.),

Reviewing qualitative research in the social sciences (pp. 1-10). New York:

Routledge. doi: http://dx.doi.org/10.4324/9780203813324

Turner, D. W. (2010). Qualitative interview design: A practical guide for novice investigators. The Qualitative Report, 15(3), 754-760.

Twinn, S. (2003). Status of mixed methods research in nursing. In A. Tashakkori, & C. Teddlie (Eds.), Sage Handbook of mixed methods in social and behavioral research (pp. 541-556). Thousand Oaks: Sage. doi: http://dx.doi.org/10.4135/9781506335193

Warren, C. A. B. (2002). Qualitative interviewing. In J. F. Gubrium & J. A. Holstein

(Eds.), Handbook of interview research: Context and method (pp. 83-101). Thousand

Oaks, CA: Sage. doi: http://dx.doi.org/10.4135/9781412984492.n6

Wenger, G. C. (2002). Interviewing older people. In J. F. Gubrium & J. A. Holstein

(Eds.), Handbook of interview research: Context and method (pp. 111-130). Thousand

Oaks, CA: Sage. doi: http://dx.doi.org/10.4135/9781412984492.n6

Whittemore, R., Chase, S. K., & Mandle, C. L. (2001). Validity in qualitative research.

Qualitative Health Research, 11(4), 522-537. doi:

http://dx.doi.org/10.1177/104973201129119299

Winter, G. (2000). A comparative discussion of the notion of ‘validity’ in qualitative and

quantitative research. The Qualitative Report, 4(3), 1-11.

Wolcott, H. F. (1990). On seeking—and rejecting—validity in qualitative research. In E.

W. Eisner & A. Peshkin (Eds.), Qualitative inquiry in education: The continuing

debate (pp. 121-152). New York: Teachers College Press.

Yancey, A. K., McCarthy, W. J., Harrison, G. G., Wong, W. K., Siegel, J. M., & Leslie, J. (2006). Challenges in improving fitness: Results of a community-based, randomized, controlled lifestyle change intervention. Journal of Women’s Health, 15(4), 412-429. doi: http://dx.doi.org/10.1089/jwh.2006.15.412

Yanchar, S. C., & Williams, D. D. (2006). Reconsidering the compatibility thesis and eclecticism: Five proposed guidelines for method use. Educational Researcher, 35(9), 3-12. doi: http://dx.doi.org/10.3102/0013189x035009003

Yonge, O., & Stewin, L. (1988). Reliability and validity: Misnomers for qualitative

research. The Canadian Journal of Nursing Research, 20(2), 61-67.

Zohrabi, M. (2013). Mixed method research: Instruments, validity, reliability and

reporting findings. Theory and Practice in Language Studies, 3(2), 254-262. doi:

http://dx.doi.org/10.4304/tpls.3.2.254-262

Zoppi, K. A., & Epstein, R. M. (2002). Interviewing in medical settings. In J. F. Gubrium

& J. A. Holstein (Eds.), Handbook of interview research: Context and method (pp. 355-

384). Thousand Oaks, CA: Sage. doi:

http://dx.doi.org/10.4135/9781412973588.d23


Appendices


Appendix A: Interview Fidelity Ideal Matrix

Matrix columns: Interview Fidelity Ideal | Example from Hearts on Track Project | Philosophical Context | Design Phase | Training Phase | Maintenance Phase | Evaluation Phase


Appendix B: Code List


Appendix C: Interview Fidelity Ideals Summary

Interview Fidelity Ideal: Value
Details: Research question composition, revision, & elimination; Participant selection; Memoing
Validity Contributions: Question writing, revision, and elimination; purposeful participant selection; data generation through written and spoken data.

Interview Fidelity Ideal: Interviewer-Participant Association
Details: Interviewer & participant roles; Interviewer reflexivity; Power; Bias
Validity Contributions: Clarity in interviewer and participant roles ensures effective data collection. Trusting relationships facilitated trustworthy data.

Interview Fidelity Ideal: Participant Accommodation
Details: Health accommodation; Content accommodation
Validity Contributions: Where participants’ needs for health and safety were respected, rich data were gleaned.

Interview Fidelity Ideal: Process & Procedures
Details: Probing & follow-up questions; Interview structure; Interview management; Training materials
Validity Contributions: Elements pertaining to the conduct of the interview, including question phrasing, probing, pacing, turn taking, and nonverbal messaging, were part of this dimension. Standardized training of interviewing staff also helped shape the focus of the fidelity efforts.

Interview Fidelity Ideal: Data Management Dimensions
Details: Record keeping
Validity Contributions: Accurate record keeping, minimization of missing data, and integrity of equipment all contribute to quality.


Appendix D: Interview Manual

Qualitative Interviewers’ Training Manual

Hearts on Track

Trainee Reference Materials

Compiled by Amanda L. Garrett, M.S.

Office of Qualitative & Mixed Methods Research

April 2013

TABLE OF CONTENTS

Unit 1—QUALITATIVE RESEARCH

Basics of Qualitative Research
Reading: What is Qualitative Research? (Merriam, 2009)
PowerPoint: What is Qualitative Research?
Activity: Introductory Interview Workshop

Listening
Activity: Listening Competency Scale
Reading: Listening to, and for, the Research Interview (Talmage, 2012)
Activity: 4 Lenses of Listening (James, 2012)
Summary: Skillful Listening, Active Listening, Genuine Listening, & Deep Listening (Eliot, 2011)

Reflexivity
PowerPoint: Researcher Reflexivity
Summary: Lenses for Reflexive Interviewers
Activity: Researcher Reflexivity Paragraph

Unit 2—QUALITATIVE INTERVIEWS

Conducting Interviews
Summary: Interviewer as Miner or Traveler (Kvale & Brinkmann, 2009)
Reading: Doing Qualitative Interviews (Hatch, 2002)
Reading: Conducting Effective Interviews (Merriam, 2009)
PowerPoint: Interviews
Readings: The Interview Subject & Interviewer Qualifications (Kvale & Brinkmann, 2009)
Activity: How to Do a Research Interview (Gibbs, 2011)

Interview Conversations & Questions
Reading: The Responsive Interview as an Extended Conversation, pp. 95-107 (Rubin & Rubin, 2012)
Summary: Benefits of Note-taking while Interviewing (Eliot, 2011)
PowerPoint: Interview Questions
Summary: Ten Clarifying Techniques to Use in Qualitative Interviewing (Eliot, 2011)
Reading: The Responsive Interview as an Extended Conversation, pp. 107-114 (Rubin & Rubin, 2012)


Interview Issues & Ethics
Reading: Conversational Partnerships, pp. 71-90 (Rubin & Rubin, 2012)
Reading: Managing the Interviewer Self (Lillrank, 2012)
Activity: Problem-solving Session for Interview Glitches

Special Considerations
Summary: Interviewing Older People (Wenger, 2003)
Summary: Race, Subjectivity, and the Interview Process (Dunbar, Rodriguez, & Parker, 2003)
Summary: Interview Location and its Social Meaning (Herzog, 2012)
Summary: Telephone Interviewing (Rubin & Rubin, 2012; Shuy, 2003)
Summary: The Value of Interviewing on Multiple Occasions or Longitudinally (Grinyer & Thomas, 2012)

Unit 3—QUALITATIVE INTERVIEWS IN THE HEALTH SCIENCES

Interviews in the Health Sciences
Summary: Unstructured Interviews and Health Research (Low, 2007)

Hearts on Track Interviews
Reading: Hearts on Track Interview Scripts & Questions
Activity: Mock Interview with Hearts on Track Interview Questions


Appendix E: Booster Session Materials

Booster Session #2 Agenda

Participants:

6-13-14

• How are things going in general?
• Length of interviews
  o 20 minutes
  o Aim for shorter interviews due to increased transcription costs. Please do your best to keep the interview focused and on task.
• Missing Data
  o Doesn’t seem to be an issue anymore.
• Telephone Interviews
  o Has anyone conducted a phone interview?
  o Yes, xxxx in Detroit has. There was some concern over the quality of the recording, but this was not a problem.
• Purposeful sampling of an additional 10-15 participants per site will be greatly assisted by the qualitative interviewers.
  o Look for articulate individuals, unusual individuals or things that are said, people who have overcome other conditions/comorbidities, things you haven't heard before, emerging patterns, etc. These may become factors or things that we consider and select participants based upon. Contact me via email with a short description and participant number. I will make a note of this, follow up, and carry this forward in other research team meetings. Thank you so much.
• 18 Month Exit Interview Protocol
  o We will use a different question protocol as participants exit the program so that we can learn as much as possible.
  o If you have any ideas for questions, now is the time to make a suggestion.
  o Let’s review the questions. Some are the same, for consistency & longitudinal analysis. We begin with the same question.
  o Item ideas:
    o #9: values and attitudes may be hard to process; perhaps "how important is exercise" or "do you value exercise"
    o #8: participants may feel a need to rate this question even though there is no scale provided
• Feedback from Interviews (see separate document)
• Wiki
  o The wiki will continue to serve as a repository of information on how we made decisions as we collected the qualitative data. You are welcome to read this material, contribute, and comment as you would like. A short demonstration was provided so that all participants can access the information.

______


Interview Specific Feedback

June 2014

• I listened to 7 interviews (at both sites, across all interviewers, and all time points: 3 mo., 6 mo., & 12 mo. interviews) in the time period between Dec 2013 and May 2014 for these comments.
• We are moving past many of the 3-month interviews and on past many of the 6-month interviews to the 12-month interviews. Soon we will be conducting our first 18-month interviews, which will be exit interviews. These interviews will be different, so this will be a change for you and for the participant.
• Note sheets: Please add your name to these sheets. The memos have been helpful. Keep these up.
• Coach & Session Leader note sheets: Do include the correct interval (e.g., 24 months). Don’t just write "6 months," because we have no way of comparing that across sites and across interviewers. This information should be in REDCap or whatever scheduling program is used. This also helps us in terms of naming the files and retrieving the files as well.

It is also important to know what # interview it is with each coach and session leader, especially if it is a first interview.

• Do cover all questions and probes. This can be difficult as you become so familiar with the interview protocol.

Things that are going well

Niceties

“Are you comfortable?”

“R: Thank you for coming in when it’s this cold!”

Jovial mood

Laughter

Encouragement

R: That’s amazing. [walking around a park]


Summarizing

R: Um, I’ve taken from what you’re telling me that physically, exercise makes you feel great.

Listening

Right, Yea

I don’t see a lot of interruptions or putting words into participants’ mouths.

Good use of probes

Q1.) following up/probes

R: OK. So, can you tell me about what you’re currently doing, relative to exercise. What’s your routine? What kind of things are you able to do?

P: Well, I uh am fortunate to be a part of the program. I uh typically will exercise about well, a little bit less than the 150 target. Uh maybe about, between 120 and 150. Um most of, well, half of those uh minutes are achieved here, um, uh on Friday, on Fridays and after the lectures on Thursday and then the balance I um, I work out at home.

R: OK. What kind of things do you do here and then at home?

P: Well generally I will do the treadmill, and the new step.

R: OK.

P: Right, I like those. I haven’t um progressed to the uh resistance area as yet, but uh, typically uh treadmill and um new step.

R: What about at home?

P: Um well I have a bike--

R: OK. A stationary bike?

P: Stationary bike, and uh a new step also.

R: Oh, OK. Good. Is this, what you’re doing now different than what you were doing before you joined the program, or--?

P: Yes, yes. Uh this has, the program has encouraged me to become a little more active. Um and exercise more, uh it’s been a--a real good incentive to uh you know, to do that.

R: Do you exercise with anyone?

P: No.

R: By yourself?

P: Well, uh when I’m here of course, but uh at home, typically it’s alone.

R: So how do you think it’s going for you, the past 3 months?

P: Well I think it’s um been well. I think--I think, well I like the program and I think it’s been productive with respect to you know, certain objectives, but uh yeah I think it’s-- it’s been really, really, well appreciated.

R: You said it, something about certain objectives. Can you tell me a little more about what you mean by that?

P: Well, the goal here is 150.

R: OK.

P: 150 minutes a week, that’s about 30 minutes a day. But sometimes other commitments prohibit me from you know, meeting those goals.

R: [inaudible]

P: Yeah.

R: Fit it all in, huh?

P: Uh yes.

Clarification

R: So you’re saying, if I’m understanding you correctly, that the coach that you have here would also be able to contact you during scheduled exercise sessions at home.

Making the participant feel valued

R: Well that’s very valuable, thank you so much for sharing those thoughts.

Following the participants’ lead (brought up at another time, not during barrier question).

R: That was a barrier, the transportation for you.

P: Transportation, right.

Novel questioning

R: So do you think the exercise has helped you so that you are prepared to walk them [dogs]?

Things we could do more of…

Details, details, details around strategies & improvements

• Especially overcoming barriers
• What are you doing to encourage yourself/stay on track?
• Feel physically, emotionally, life improved

Continue to be careful about interjecting your own ideas. Do more listening than talking.

o Balance of suggesting an idea and suggesting data


Appendix F: Interview Guide

Participant # 3mo. 6 mo. 12mo. 18mo. In Person OR Phone Date Time Location

Introduction Script: Thank you for taking time to meet with me to discuss your exercise. Through this study, we aim to understand what exercise is like for a person with a heart condition. I am meeting with you because you are an expert on the topic of exercising with a heart condition. Your honest thoughts and opinions are what we are interested in. Please do not feel that you must have been exercising to your standards or the standards of the study for your thoughts and opinions to matter. I am not here to judge you or your exercise behavior. It is important that we get real, truthful answers about how your exercise experience is going so our findings are accurate. The results from this study will be used to inform others about what it is like to exercise with a heart condition and allow us to make solid recommendations about exercise to improve the lives of others with similar heart conditions.

The interview will last about 30 minutes. I will be recording and transcribing what we say today. Your information will be kept confidential. Are you ready to begin?

Interview Questions    Notes

[The numbered interview questions on this form were printed in a symbol font and are not legible in this copy.]

Post-interview Reflections    Interviewer Initials

Analytic Memo (Factors for Exercise Adherence; Question revision)

Reflective Memo (Self-improvement strategies as interviewer)

Conclusion Script: From all of us on the research team, we sincerely thank you for your insights and your time. I look forward to our next visit together (said in all but the last interview).


Appendix G: Interview Review Guide

1. Examine the timeframe of the interviews and ascertain which interviews took place, how many, of what type, where, and who conducted them.
2. If any of these have been transcribed, read them. If not, listen to the audio recordings.
3. Pay attention to things that are going well and things that can be improved upon within the context of the interview.
   a. Examples: elements of rapport and ease
   b. Interviewer and participant roles understood
   c. Question phrasing is clear, open-ended, & focused
   d. Follow-up questions/probes are used for clarity and detail
   e. Response suggestion is limited (participant is ill/confused/stuck)
   f. Attentive listening/hearing
   g. Interviewer acknowledgement rather than agreement
   h. Effective use of pause and silence
   i. Appropriate pace maintained
   j. Participant not allowed to wander off topic for extensive periods
   k. Suitable praise offered to participants achieving study or personal goals; no judgment rendered for those not reaching benchmarks
   l. Empathy, not sympathy, given to those with setbacks or ill health
   m. Note taking/memoing on interview note sheets regarding questions, non-verbal cues, analysis, and self-reflection of interview skills
   n. Adequate interview length attained; all questions answered
   o. Recording/transcript clear; minimal passages inaudible/not transcribed; recording/transcript does not continue into another interview
4. If there is a transcript available, cut and paste examples of these. If only audio is available, make paraphrased notes to serve as examples.
5. Customize a booster session on the topics that need to be addressed.