
Psychologia, 2013, 56, 89–112

INVITED ARTICLE

COMPETING MODELS OF EVIDENCE AND CORROBORATING RESEARCH STRATEGIES: SHAPING THE LANDSCAPE OF RESEARCH IN THE ERA OF EVIDENCE-BASED PRACTICE

Shigeru IWAKABE

Ochanomizu University, Tokyo, Japan

Empirically supported treatments have brought much enthusiasm for establishing a firm empirical basis for clinical practice. However, controversies abound about what constitutes evidence in psychotherapy and what research methodologies qualify as sufficiently rigorous to produce such evidence. Although most psychotherapy researchers agree with the underlying rationale that psychotherapy should be based on rigorous scientific research, many disagree as to what constitutes ideal scientific practice in evidence-based psychotherapy, and alternative frameworks of evidence have been proposed. This paper first reviews different models of evidence, namely empirically supported treatments, empirically supported psychotherapy relationships, research informed principles of therapeutic change, and evidence-based practice in psychology, and discusses their relative strengths and limitations. Second, it illustrates three research topics and corresponding strategies that both supplement these evidence models and deepen an understanding of process and outcome in psychotherapy. These three areas are: mechanisms of change, systematic case studies, and researcher-practitioner collaboration.

Key words: evidence-based practice, psychotherapy research, empirically supported treatments

The importance of research in psychotherapy is recognized and agreed upon by most psychotherapists. From the first psychological clinic formed by Lightner Witmer (1907/1996), the application of scientific principles was at the forefront of psychological services. The Boulder model of training, which is the most well-known and influential model of doctoral training in psychology, emphasizes that psychologists should be trained as both scientists and practitioners, and that psychological practice is in essence the rigorous and objective application of scientifically derived knowledge (Shakow et al., 1947). In reality, however, a well-recognized schism between research and practice exists (Beutler, 2009; Dawes, 1994; Frazier, Formoso, Birman, & Atkins, 2008; Soldz & McCullough, 2000; Talley, Strupp, & Butler, 1993). In the most recent shift toward evidence-based treatments and practice, this schism has come into even sharper focus (Elliott, 1998; Kazdin, 2008; Norcross, Beutler, & Levant, 2005; Wampold, 2001).

Correspondence concerning this article should be addressed to: Shigeru Iwakabe, Associate Professor, Ochanomizu University, Otsuka 2-1-1, Bunkyo-ku, Tokyo, 112-8610, JAPAN (e-mail: iwakabe.shigeru@ocha.ac.jp).

Empirically supported treatments (ESTs), which are the forerunners of the recent evidence-based movement, have stirred up debates among practitioners, researchers, and health-care policy makers (Norcross et al., 2005; Roth & Fonagy, 2004). The core issue lies in how to determine what constitutes evidence for the effectiveness of psychotherapy. ESTs exclusively prize randomized clinical/controlled trials (RCTs) as the gold standard for determining evidence for the effectiveness of an intervention. This narrow methodological definition of evidence has been criticized as a threat to the development of the science of psychotherapy rather than progress toward the establishment of empirically based psychotherapy (Bohart, O'Hara, & Leitner, 1998; Elliott, 1998; Wampold, 2001). As a result, alternative views of evidence, ones that include findings from a wider range of psychotherapy research beyond RCTs, have been proposed. Although these different models or frameworks of evidence are ecologically more valid and clinically more relevant, as well as more reflective of findings from a wide variety of therapies (e.g., theoretical orientations, treatment modes, etc.), they still leave out some variations of psychotherapy research that are crucial to the development of psychotherapy and its science. Psychotherapy research concerns not only confirming presumed links between intervention and outcome, but also uncovering how psychotherapy works, as well as discovering and describing new phenomena (Elliott, 2010; Greenberg, 1991; Mahrer, 1988). In order to change the long-strained relationship between research and practice in psychotherapy, it is also important to develop research methodologies that make scientific findings clinically more relevant, as well as to explore ways in which the collaboration and partnership of researchers and practitioners in the evidence-based movement can be realized (Castonguay, 2011; Safran, Abreu, Ogilvie, & DeMaria, 2011).

The goal of this paper is to review four major models and frameworks of evidence and to propose three areas of psychotherapy research that augment these four models. What I mean by "models of evidence" is that a model of evidence: (1) defines what constitutes "evidence"; (2) defines what needs to be empirically validated; (3) delineates methodological research guidelines and standards that need to be satisfied to qualify as scientific research; and (4) provides guidelines for applying empirically derived knowledge in practice. The four models of evidence that will be discussed are: Empirically Supported Treatments (The Division 12 Task Force of the American Psychological Association, 1995), Empirically Supported Psychotherapy Relationships (Norcross & Wampold, 2011), Research Informed Principles of Change (Castonguay & Beutler, 2005a), and Evidence-Based Practice in Psychology (EBPP; American Psychological Association [APA] Presidential Task Force on Evidence-Based Practice, 2006). These four models delineate the scope of research evidence and conceptualize evidence-based practice differently. It is hoped that comparison of these four models will demonstrate the relative strengths of each model and their resulting clinical benefits and limitations, while identifying important empirical questions that require research strategies other than those emphasized in the above models.

There are other attempts to define guidelines and frameworks for evidence-based practice within and outside the United States.
For example, APA Division 17 (Society of Counseling Psychology) offered a framework of empirically supported treatment within counseling psychology (Wampold, Lichtenberg, & Waehler, 2002). APA Division 32 (Society for Humanistic Psychology) also provided guidelines for the provision of humanistic psychological services (Task Force for the Development of Guidelines for the Provision of Humanistic Psychosocial Services, 1997). In Great Britain, the Guidelines Development Committee of the British Psychological Society (Department of Health, 2001) released Treatment Choice in Psychological Therapies and Counselling: Evidence-Based Practice Guidelines. In Germany, the federal government commissioned an expert report on the effectiveness of psychotherapy to guide the revisions of legal regulation of psychotherapy (Strauss & Kaechele, 1998). Although these are also notable, this paper focuses on the previous four models because of the contrasting views they present, illuminating different aspects of psychotherapy and its scientific evidence.

In the second part of the paper, three strategies of psychotherapy research will be reviewed. First, research that focuses on mechanisms of change in and out of therapy will be reviewed. Second, systematic case study research describing the process of both successful and unsuccessful therapy and identifying therapeutic factors within the context of a specific case will be illustrated. Finally, research projects that explore the researcher-practitioner relationship and that encourage collaboration between researchers and practitioners will be examined.

FOUR MODELS OF EVIDENCE

Empirically Supported Treatments

Empirically supported treatments (ESTs) were initiated by the Division 12 Task Force of the American Psychological Association (1995), which recognized the need to establish effective treatments of proven efficacy "in the heyday of biological psychiatry" (p. 3). Pressure to prove the cost-effectiveness of psychotherapy against the more widely accepted biological psychiatry was mounting, as reducing service costs became one of the primary tasks of health care in the US. The establishment of ESTs was a response to this social, economic, and professional need. The general benefits of psychotherapy and psychological interventions were established by large-scale meta-analytic studies conducted since the 1970s (Lambert & Ogles, 2004; Lipsey & Wilson, 1993; Smith & Glass, 1977; Smith, Glass, & Miller, 1980; Wampold, 2001). However, the perception that psychological treatment was less effective than pharmacological treatment remained. Therefore, one of the goals was to identify treatments for particular disorders with evidence of efficacy comparable to that for medications (Chambless et al., 1996, 1998).

ESTs are essentially treatment guidelines that provide specific recommendations about treatments to be offered to patients by elaborating a list of empirically supported, manualized psychological interventions for specific disorders on the basis of randomized, controlled studies that meet standards of methodological rigor equivalent to those of pharmacological research (Chambless & Hollon, 1998; Task Force on Promotion and Dissemination of Psychological Procedures, 1995). In 1995, the APA Division 12 Task Force on Promotion and Dissemination of Psychological Procedures published criteria for identifying empirically validated treatments (subsequently relabeled "empirically supported treatments") for particular disorders. The initial list of ESTs consisted of 18 well-established treatments. Currently, there are over 200 treatments that have been admitted to this list.

Although promoting the use of empirically supported interventions by clinicians seems reasonable and even desirable considering that the principles of psychotherapy should be based on scientific data, the matter was far more complex than the seemingly simple ideal of scientific psychotherapies. A number of debates and controversies were triggered (Elliott, 1998; Norcross et al., 2005). On the one hand, there were enthusiastic responses that empirically supported psychological treatments would become the basis for the public health system and provide a standard for graduate training in psychology. On the other hand, many psychologists, both researchers and practitioners, raised serious concerns about the decision rules requiring treatment manuals, the focus on specific disorders, and the use of randomization (Wampold, 2001), all of which privileged brief, manualized treatments mostly of a cognitive-behavioral orientation. Indeed, 60 to 90% of the ESTs on the list are cognitive-behavioral treatments. None of these research decisions reflect psychotherapy as practiced in real clinical settings (Addis, Wade, & Hatgis, 1999; Messer, 2004). Moreover, the inclusion and exclusion criteria used in RCTs are so stringent that only a handful of patients are admitted to a study. For example, in RCTs for depression, fewer than 30% of patients were accepted into the studies, drastically reducing the external validity of these studies (Westen & Morrison, 2001; Westen, Novotny, & Thompson-Brenner, 2004).
In addition, patients with comorbid disorders are excluded from RCTs. It was estimated that ESTs address only 16% of the actual population that clinicians deal with in the real world (Westen et al., 2004). Another area of criticism was directed at the dissemination of a list of ESTs. Some argue that the treatments that are found to be efficacious in experimental conditions may not necessarily be effective in clinical settings (Chorpita et al., 2002). The list may also be misused: treatments that are not on the list may be perceived as invalid or ineffective (Castelnuovo, 2010; Morrison, Bradley, & Westen, 2003). Furthermore, critics contend that the list is more politically motivated than scientifically based: managed care companies may choose to reimburse the shorter treatments on the list, while discrediting longer and more costly treatments that are both necessary and more effective in the long run (Bohart et al., 1998; Elliott, 1998; Henry, 1998; Wampold, 2001). Controversies still continue. Humanistic and psychodynamic therapists, as well as practitioners in private settings as a group, are strongly opposed to this model of evidence (e.g., Bohart et al., 1998; Karon, 1995; Messer, 2004; Tavris, 2003). Alternative models have been proposed in response, which we will turn to next.

Empirically Supported Psychotherapy Relationships

The Division of Psychotherapy (Division 29) Task Force of the American Psychological Association, at the initiative of John Norcross and Michael Lambert, took a different approach to establishing evidence by examining a wider range of process and outcome studies and focusing on evidence concerning variables associated with the therapeutic relationship (Norcross, 2002). They argue that by limiting the scope of empirical evidence to RCTs, ESTs did not take into consideration the vast amount of research findings from psychotherapy process and outcome studies. Many of these studies were conducted in actual clinical settings; as a result, while their internal validity is limited compared to that of RCTs, their ecological validity is preserved (Lambert & Barley, 2002). In addition, although there is variability in the level of methodological control, these studies and their meta-analyses consistently point to similar conclusions: the therapeutic relationship accounts for approximately 10% of the total variance of outcome, the therapist for 8%, while the specific treatment method accounts for 5% to 8% (Lambert & Barley, 2002). Beyond client factors (including the severity of distress), which account for approximately 25% to 30% of the total variance, the therapeutic relationship, together with the therapist effect, was found to contribute most to client change.

The Task Force included a wide variety of studies that examined the relationship element in psychotherapy outcome, from quantitative correlational studies to rigorous qualitative studies. Outcome was broadly defined, from post-session changes to distal overall treatment outcomes. The Task Force established a number of methodological criteria to evaluate the evidential value of each study: the number of supportive studies, the consistency of the research results, the magnitude of the positive relationship between the element and outcome, the directness of the link between the element and outcome, the experimental rigor of the studies, and the external validity of the research base.

The Division 29 Task Force had dual aims. The first aim was to identify those effective relationship behaviors primarily provided by the psychotherapist. Five elements—the working alliance, cohesion in group therapy, empathy, goal consensus, and collaboration—are judged to be demonstrably effective relationship elements. Seven relational elements that are promising and probably effective are: positive regard, congruence, feedback, repairing alliance ruptures, self-disclosure, management of countertransference, and the quality (but not quantity) of relational interpretations (Norcross & Wampold, 2011). The second aim of the Task Force was to identify those patient behaviors or qualities that serve as reliable markers for customizing the therapy relationship. The focus here was on what works for particular clients. This is an attempt to integrate into their model of evidence the client factors that take up the largest share of outcome variance. The Task Force members reviewed the research evidence for adapting the therapy relationship to client characteristics. Two adaptations are judged to be demonstrably effective: one is matching therapeutic directiveness to level of resistance, and the other is providing lengthier, more intensive intervention to address higher functional impairment. Adjusting the relationship to clients' stage of change and coping style is judged to be probably effective. Finally, adaptations to client expectations and attachment style are judged to be promising and probably effective.

One of the main strengths of empirically supported psychotherapy relationships is that this model draws evidence from a wider variety of psychotherapy research and is congruent with the results of meta-analyses.
Its focus on the psychotherapy relationship allows dialogue and collaboration among researchers and clinicians with different theoretical approaches, as the therapeutic relationship is a trans-theoretical concept. It also liberates psychotherapy from the narrow and limiting definition implicit in ESTs, which delimit psychotherapy as short-term manualized interventions to ameliorate symptoms of Axis I disorders.

There are some challenges in this model of evidence. The first concern is that many studies examining the connection between the relationship element and treatment outcome are correlational in nature; therefore, causal inferences are difficult to make and are only speculative. Second, the model does not pay sufficient attention to the treatment-specific and disorder-specific nature of the therapeutic relationship: for example, meta-analyses on the relationship between empathy and outcome showed that empathy might be more important to outcome in cognitive-behavioral therapies than in others (Elliott, Bohart, Watson, & Greenberg, 2011). In sum, empirically supported psychotherapy relationships are based on large-scale meta-analytic studies and provide trans-theoretical guidelines as to the role of the therapeutic relationship in psychotherapy.

Research Informed Principles of Therapeutic Change

Castonguay and Beutler (2005a) led a Task Force sponsored by the North American Society for Psychotherapy Research (NASPR) and the Division of Clinical Psychology of the American Psychological Association (APA, Division 12), which was designed to counterbalance and integrate the above two views of evidence—one focusing on the treatment and the other focusing on the therapeutic relationship. Their focus was on principles of change operative across different treatments. The idea of principles of change was originally put forth by Goldfried (1980), who pointed out that many of the specific theory-driven procedures and interventions assumed to be responsible for the effectiveness of a particular orientation are best viewed as specific manifestations of underlying common mechanisms of change. For example, teaching clients cognitive control over their emotions in cognitive behavioral therapy and a therapist's empathic responses to a client's emotional pain are both means of enhancing client emotional regulation, though the theoretical constructs, as well as the terminologies that describe these processes, might be markedly different. Seeking similarities in different approaches at the most concrete level (involving technical procedures and therapist responses) or at the most abstract level (involving theories of personality functioning) can be difficult and often fruitless. However, this intermediate level, characterized by general strategies or principles of change, allows a more clinically relevant and conceptually meaningful convergence among different approaches, as clinicians often work with clients with this level of therapeutic goal in mind.

Instead of applying evidential criteria to determine the quality of research designs, Castonguay and Beutler (2005b) used a consensual approach to arrive at research-informed principles of change. The members of the Task Force worked in pairs, most of which were comprised of researchers of different theoretical orientations, so that therapeutic principles of change were not tied to a particular theoretical approach but instead represented more general and common ground. They reviewed a wide variety of empirical literature and extracted general principles of change. Principles of change were sought for four clusters of clinical problems that are frequently encountered in clinical practice: dysphoric, anxiety, personality, and substance use disorders. In addition, relevant findings on participant factors and relationship factors associated with the above four clinical clusters were also examined. As a result, 61 research informed principles of therapeutic change were identified. For example, Follette and Greenberg (2005) derived six principles in the treatment of depression, including: the challenging of cognitive appraisals, increasing positive reinforcement in the client's life, improving the client's interpersonal functioning and social environment, and fostering emotional awareness, acceptance, and regulation. Finally, the members were asked to identify specific principles that had not been duplicated across disorders within the four clusters.

The model of evidence based on principles of change uses experts' judgment to identify principles of change; therefore, the selection of experts and their consensual process are crucial. In addition, the principles of change model does not pressure clinicians to abandon their current approaches to take up new approaches.
Instead, the principles alert clinicians to potential areas of importance for particular problems. This is a more realistic alternative to having to learn over 100 different manuals, one for each disorder. There are still a limited number of research findings regarding personality disorders, and the interaction between relationship and client variables needs to be examined in the future. However, Castonguay and Beutler's principles of change present a model of evidence that provides common ground on which clinicians and researchers of different theoretical orientations can communicate and collaborate, rather than having each group compete with different brand-name models for effectiveness.

Evidence-Based Practice in Psychology (EBPP)

Another model of evidence is evidence-based practice in psychology (EBPP), which was put forth by the APA Presidential Task Force appointed by the 2005 APA President, Ronald Levant. EBPP is defined as "the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences" (APA Presidential Task Force on Evidence-Based Practice, 2006, p. 273). The definition of EBPP closely parallels the definition of evidence-based practice by the Institute of Medicine (2001), which was originally put forth by Sackett and his colleagues (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996), extending the examination of clinical expertise and broadening the consideration of patient characteristics. EBPP focuses on practice guidelines concerning professional and clinical conduct rather than on specific treatment guidelines offered to patients, as in ESTs (APA Presidential Task Force on Evidence-Based Practice, 2006). While ESTs start with a treatment and ask whether a particular treatment is effective for a certain disorder under specific conditions, relying on the findings from RCTs, EBPP starts with the patient and asks what research evidence (including but not limited to relevant results from RCTs) will assist the psychologist in achieving the best outcome. EBPP emphasizes a decision-making process for integrating multiple sources of research findings into the intervention (Levant & Hasan, 2008).

Instead of limiting the scope of research designs to RCTs, EBPP defines best research evidence as "scientific results related to intervention strategies, assessment, clinical problems, and patient populations in laboratory and field settings as well as to clinically relevant results of basic research in psychology and related fields" (APA Presidential Task Force on Evidence-Based Practice, 2006, p. 274). It is acknowledged that different research designs are better suited to address different types of questions. Exploratory clinical observation is recognized as a valuable source of innovation and hypothesis building. Qualitative research, systematic case studies, and ethnographic research are all recognized for their unique contributions.

Evaluation of the evidentiary value of research is guided by two dimensions that are in agreement with the APA policy for the Criteria for Evaluating Treatment Guidelines (American Psychological Association, 2002). The first dimension is treatment efficacy, which is the systematic and scientific evaluation of whether a treatment works. The second dimension is clinical utility: the applicability, feasibility, and usefulness of the intervention in the specific clinical settings where it is to be offered. EBPP does not prescribe or endorse particular types of research design. Instead, it requires that psychologists recognize the strengths and limitations of evidence obtained from different types of research, that they examine all components of practice from treatment method, psychologist/therapist, and client factors to the treatment relationship, and that they use their best clinical judgment and knowledge to develop coherent treatment strategies (APA Presidential Task Force on Evidence-Based Practice, 2006, p. 275).
Finally, clinical expertise is essential for identifying and integrating the best research evidence with clinical data (e.g., information about the patient) in the context of the patient's characteristics and preferences, to deliver services that have the highest probability of achieving the goals of therapy (p. 275). Therefore, it is not the direct application of manualized treatments or empirical findings that is important, but the careful examination of research evidence to best serve a particular individual through clinical decision making. In the EBPP model, research evidence is one of three essential components of evidence-based practice, along with clinical expertise and patient characteristics, culture, and preferences. Empirical findings are not limited to treatment, but also include psychological assessment, case formulation, the therapeutic relationship, and interventions. EBPP requires an appreciation of the value of multiple sources of scientific evidence. The role of the treating psychologist is to determine the applicability of research conclusions to a particular patient based on clinical inferences. Therefore, ongoing systematic monitoring of patient progress is essential to treatment adjustment.

Comparisons

These four models of evidence are in great contrast to one another. ESTs are about a whole treatment, and they take a single-theory approach that is both short-term and manualized, whereas empirically supported psychotherapy relationships and research informed principles of therapeutic change are about effective components of treatment; they take an integrative stance, in that these findings are applicable across different theoretical orientations. They also accommodate research findings not only from short-term manualized therapies but also from psychotherapies conducted in actual clinical settings in various treatment frameworks. Finally, EBPP does not specify the effective components or treatments, but instead provides practice guidelines for evaluating research findings for use in clinical practice. EBPP requires therapists to carefully identify and integrate the best research evidence with clinical data in the context of the patient's characteristics and preferences. Clinical expertise is essential.

The determination of whether findings constitute evidence also differs in the four models. ESTs approve only highly controlled experimental studies such as RCTs and single-case design experiments. Empirically supported psychotherapy relationships admit a wide variety of process-outcome studies and their meta-analyses; evidential criteria associated with effect sizes and correlation coefficients are valued. Research informed principles of therapeutic change use the consensus of two experts who review a variety of empirical studies, rather than relying on specific statistical indexes, in determining common core change principles. Finally, EBPP does not prescribe what research to pay attention to because, in this model, research evidence always needs to be examined and chosen for a particular individual. The emphasis is on the use of clinical expertise to weave scientific findings into practice. EBPP takes a position of methodological pluralism, recognizing the unique roles each research methodology plays in illuminating different aspects of psychotherapy and change (Slife & Gantt, 1999).

These four models can be regarded not as competing but as complementary: the most restrictive model of evidence (ESTs) might be the most important in the context of treating specific disorders in medical settings, whereas empirically supported psychotherapy relationships and change principles provide a more general and integrative platform where researchers of different theoretical orientations can collaborate and build a common structure for the scientific practice of psychotherapy. Finally, EBPP offers more concrete guidelines to translate these findings for working with particular patients.

Although the four models cover a wide range of studies, there are other important roles for scientific research in psychotherapy. For example, we need to explore the uncharted territory of psychotherapy: clinicians are always experimenting with new techniques and interventions in their practice, even without systematic research. Indeed, Stiles (2005) stated that most psychotherapy approaches were developed not in laboratories but from clinical observations and experiences with innovative interventions, and were used long before their efficacy was demonstrated in controlled and systematic empirical research. Even for well-established approaches, principles and intervention strategies were developed on the basis of close observations of individual cases (Edwards, Dattilio, & Bromley, 2004).
In clinical practice, practitioners sometimes encounter new types of psychological problems and cases for which existing theories and techniques might turn out to be ineffective (Goldfried, 2000). Practitioners often creatively integrate interventions from other approaches to cope with these situations. This can lead to the development of new conceptualizations and techniques. The above models of evidence are based on summaries of accumulated knowledge in the field. Scientific research is as much exploratory as confirmatory (Greenberg, 1986). The role of discovery and exploration in scientific research is not sufficiently recognized in evidence models.

Second, evidence models do not tell us what the actual implementation of these interventions looks like for a particular case. When RCTs show that one treatment is more efficacious than TAU (treatment as usual) or controls, it is a group difference that is found. The group mean represents an average response to the treatment, but it does not tell us who responded well and who did relatively poorly. The differential responses might be associated with a number of factors, such as client personality and other characteristics, external events, and certain therapeutic processes (Kiesler, 1966). RCTs alone cannot get at these aspects of differential responses, which are clinically relevant and crucial in disseminating ESTs. Similarly, the results of meta-analyses provide information as to how much a particular variable relates to outcome in general, but not how these variables come into play in specific clinical situations and dyads. Finally, RCTs tell us whether the treatment worked, but not how it worked. The question of actual mechanisms of change is still unanswered (Greenberg & Watson, 2006; Kazdin, 2008). In empirically supported psychotherapy relationships, the importance of empathy, the alliance, and other qualities is supported. However, we still need to know how these can be established to produce positive outcomes.

In sum, the four models of evidence are competing, with each proposing a definition of evidence as well as a form of clinical practice based on such evidence. They are also complementary, in that each view sheds light on different aspects of psychotherapy; each presents a unique way of integrating and synthesizing research findings. The tension between different views may be a natural and inescapable one arising from the lengthy schism between research and practice in psychotherapy, as well as from divergent views of science held by different theoretical approaches (Kazdin, 2008; Messer, 2004; Reed, Kihlstrom, & Messer, 2006). Although these four models of evidence encompass a wide range of research, there are important areas that are not covered by them. In order to build an optimal research base in psychotherapy and to establish collaboration between researchers and practitioners, we still need to explore other types of research strategies.

Corroborating Research Strategies

There are roughly three major areas or themes of research that are important in supplementing the above four models of evidence and furthering the knowledge base of psychotherapy. They are research that: (a) uncovers the mechanisms of change in and out of psychotherapy; (b) examines specific cases in order to describe the process and outcome of evidence-based practice in detail; and (c) explores and establishes collaboration between researchers and practitioners so as to transform the traditional role division between them.

Mechanisms of Change

One of the most important areas of psychotherapy research focuses on mechanisms of change (Elliott, 2010; Greenberg, 1986; Kazdin, 2008). An RCT comparing one treatment against a control group can establish a causal relation between an intervention and outcome. However, establishing causality does not explain how the change occurred, and it does not necessarily follow that the components that are hypothesized to be effective are the actual agents of change. Meta-analytic and outcome studies repeatedly demonstrate that common factors are more effective than treatment-specific factors (e.g., Elkin, 1994; Luborsky, Singer, & Luborsky, 1975; Wampold, 2001). In addition, studies of phenomena such as early session gains (Ilardi & Craighead, 1994) and sudden gains (Busch, Kanter, Landes, & Kohlenberg, 2006; Morgan, Roberts, & Ciesla, 2005) indicate that mechanisms of change other than the ones hypothesized by the theory were at work. Finally, it is important to examine the mechanisms of change as viewed by clients (Elliott, 1984; Rennie, 1994a).

Discovery-oriented research: Mahrer proposes discovery-oriented psychotherapy research whose goal is to identify impressive, significant, and/or valued change events and to determine how such change events can be brought about, working with a group of therapists (Mahrer, 1988; Mahrer & Boulet, 1999). Mahrer (1988) criticizes conventional research, in which theoretical propositions of psychotherapy models are tested, as unfruitful: theoretical constructs are not clearly defined, and findings from such research rarely lead to the revision of aspects of theory or to changes in the way therapists practice that therapy. Instead of starting research by operationalizing and testing components of existing theories, Mahrer (1988) proposes that we start by identifying and describing change events from the actual practice of clinicians that are new, surprising, and extraordinary; instead of taking a particular theoretical position in naming and describing these events, we use jargon-free language to stay at the level of simple and concrete description.

Discovery-oriented research by Mahrer starts with forming a research group consisting of 8 to 14 interested and enthusiastic therapists and researchers of various theoretical orientations who meet regularly after completing weekly assignments (Mahrer & Boulet, 1999). The group gathers psychotherapy session tapes (their own and others') in which clinicians felt something important, valuable, or unusual happened. The group members review these sessions and, after agreeing that impressive changes actually happened in the session, they each develop a narrative description of the change event and also identify both the therapist's and the client's contributions to the change. Finally, the group arrives at an account of how the therapist can recognize the sequence of changes and how such impressive changes can be used.

The process of discovery-oriented research is similar to that of group supervision and clinical case discussion. It takes an open format in which clinicians can use their clinical intuition in observing what happens in therapy while checking with group members whether sufficient evidence supports their observations and hypotheses. The focus on 'surprising' change events also helps clinicians go beyond their own theoretical perspective to examine the nature of these change events. Truthfulness of findings is warranted partly by the group consensus.
The applicability of findings is supported by listing other instances of similar change events. Mahrer's discovery-oriented research methodology has not been used as widely as Hill's consensual qualitative research (Hill, Thompson, & Williams, 1997) or Elliott's Comprehensive Process Analysis (Elliott et al., 1994), which provide more stringent and concrete guides to data gathering and analytic procedures that conform to current qualitative research guidelines for trustworthiness and dependability. However, it still represents a unique and flexible phenomenological approach to uncovering mechanisms of change that makes the best use of clinical observations and intuition in research.

Client perspective on the change process: One of the ways in which we can expand our understanding of mechanisms of change is to tap into clients' subjective views. Clients are not passive recipients of therapists' interventions but rather are active participants who reflect on the process of therapy and interpret therapists' actions sometimes differently than intended (Bohart & Tallman, 1999; Rennie, 1994a, 1994b). The therapist needs to attend to the client's needs and the client's theory of change, which refers to the perceptions and views that the client has about the nature of the problem he or she brings to therapy and its possible resolution (Duncan, Hubble, & Miller, 1997). Bedi and his colleagues showed that clients valued therapists' personal characteristics, such as good grooming, as well as the physical environment of the therapy office, in trusting their therapist and building the therapeutic relationship (Bedi, Davis, & Williams, 2005). Another study showed that client and therapist perspectives on the working alliance do not necessarily correspond well, with the therapist's evaluation having a lower correlation with post-session outcomes (Fitzpatrick, Stalikas, & Iwakabe, 2005). In addition, the client perspective is crucial in understanding phenomena such as premature terminations, therapeutic failures, and alliance ruptures. Clients tend to keep negative thoughts about their therapists to themselves (Rennie, 1994b). Hill and her colleagues showed that therapists were often unaware of the negative internal reactions of their clients, and this tendency was observed even in long-term psychotherapy with experienced therapists (Hill, Thompson, Cogar, & Denman, 1993; Hill, Thompson, & Corbett, 1992).

There have been studies that examined helpful events and factors from clients' perspectives using questionnaires and qualitative interviews administered after sessions (Elliott, 2010). Timulak (2007), for example, conducted a meta-synthetic study on client-identified helpful events and found that awareness, behavioral change, empowerment, and relief/experiential relaxation were the general categories of helpful events observed in seven studies covering 94 different cases. "Asking the client" is intuitively appealing. The integration of helpful event studies into outcome research such as RCTs will shed light on the change process within particular treatment approaches (Elliott, 2010). Focusing on the client's experience of therapy also helps researchers overcome the problem of theoretically loaded language in psychotherapy (Goldfried, 2005): clients use everyday language to describe their experience, not theoretical language biased toward certain systems of thinking.

Client life in and out of therapy: Most psychotherapy research presumes that clients' changes occur during sessions, and that their everyday life is the area where these changes are implemented. However, some studies have revealed that clients' changes occur even before they arrive in therapy (Allgood, Parham, Salts, & Smith, 1995; Howard, Kopta, Krause, & Orlinsky, 1986; Lawson, 1994). It is possible that what clients do in their everyday lives influences and interacts with what happens in therapy to generate and consolidate changes. In order to fully understand the complex ways in which a client's life outside the consulting room interacts with therapeutic work to produce an overall therapeutic change, it is necessary to extend the area of investigation to include the client's life outside therapy and relate this to what happens during sessions.

Dreier (2008) views the therapy session as just one of the many activities in clients' lives and has studied clients both during sessions and in their everyday life context. In his research, Dreier interviewed family members about aspects of their everyday life and demonstrated how clients take independent actions while they are in therapy to improve their lives. The interview strategy used by Dreier was time-consuming and demanding for clients, as it required them to come in regularly for a couple of hours of interviews.

Mackrill (2007, 2008, 2011) developed a cross-contextual diary method in which clients are asked to write about their experience in and out of therapy in order to examine the relationship between what happens in therapy and everyday life. Mackrill provided both clients and therapists with a list of specific instructions and probes regarding how to use their diaries. Both recorded what they found significant in sessions and their thoughts about why they found particular aspects of sessions significant and helpful. Clients were also asked to present details about their lives outside sessions, to describe new and different aspects of their lives, and finally to explain the possible connection between these new and different experiences and what occurred in therapy sessions. From this work the concept of client triangulation was generated: clients compared knowledge from various sources, both within and outside therapy, to become more secure in their beliefs. Mackrill (2007) concluded that the therapist was far from the only source of information that clients used to improve their life situations; instead, clients compared and contrasted alternative sources of information to validate their personal stances and the stances of others. This cross-contextual diary research enables researchers to view aspects of the therapeutic change process that are often overlooked in other research designs that focus exclusively on what happens within therapy hours. It provides a holistic view of client change in the context of the client's everyday life. Meta-analytic studies show that client factors account for the largest variance in outcome (Lambert & Barley, 2002). Research that examines client-related factors in and out of therapy will be beneficial to all four models of evidence.

Systematic Case Studies

The most basic and intuitive unit in psychotherapy practice is the "case" (Edwards, 2007; Eells, 2007): theoretical ideas and concepts are best explained and understood in the context of a particular client, and therapists think of psychotherapy in terms of each client. Indeed, supervision, which is essential to psychotherapy training, is case-based clinical learning in which supervisees integrate their theoretical knowledge, conceptualization, and intervention skills through a particular case. On the other hand, the primary sources of scientific findings in ESTs, as well as in empirically supported psychotherapy relationships and research informed principles of therapeutic change, are group comparison studies that, whether they are controlled clinical trials or correlational studies, analyze the mean scores obtained from a group of individuals who received particular treatments against those of control groups. Although finding out that a particular type of treatment is more efficacious than another treatment, or that particular process variables are positively correlated with outcome, is important, these findings on average effect size obtained from group comparison do not tell us how well one particular client will respond to that treatment or whether one particular individual will follow the hypothesized path of change. Westen et al. (2004) showed that approximately 30 to 40% of clients do not improve after receiving manualized treatments. It is more important for clinicians to know what types of clients do not respond well to the treatment and what signs and markers indicate stagnation or unproductive processes that may result in therapeutic failures. For these goals, systematic examination of selected cases with varying outcomes will be most helpful.

Clinical case studies, which are essentially therapists' narrative accounts of what happened during treatment along with interpretations based on the therapeutic work with their own clients, have been criticized for their lack of systematic and reliable methods (Flyvbjerg, 2006; Messer, 2007; Midgley, 2006; Spence, 2001). More recently, important methodological advances have been made to rectify these problems and to develop a variety of systematic case study research methods (e.g., Elliott, 2002; Hill et al., 2011; McLeod, 2010; Silberschatz & Curtis, 1993; Stiles, 2007). First, in systematic case studies, the narrative description of the therapeutic process is grounded in both quantitative and qualitative data obtained from multiple sources, such as questionnaires, therapist and observer ratings, and participant interviews (McLeod & Elliott, 2011). These multiple sources are then triangulated in order to assess the degree of data convergence and to ground interpretations (Fishman, 1999, 2005). Second, a research team, rather than a single clinician who acts as the therapist in the case, is involved in the process of data analysis (McLeod, 2010). The research team compares alternative interpretations of case materials until it reaches consensus regarding the conclusions (Elliott, 2002; Hill et al., 2011; McLeod, 2002). Systematic case studies can no longer be dismissed as anecdotal session reports biased by the therapist's subjective and theoretical perspective. Rather, they are methodologically sound, rigorous, and systematic, representing a form of mixed methods research (Dattilio, Edwards, & Fishman, 2010).

Single-case design experiments, often referred to by different terms such as intra-subject replication designs or N = 1 experiments, are rigorous methods for testing hypotheses about treatment effects (e.g., Barlow & Hersen, 1984). They are recognized as an alternative to RCTs in empirically supported treatments (Task Force on Promotion and Dissemination of Psychological Procedures, 1995). The aim of single-case design experiments is to record and assess specific changes observed in clients that are attributable to the administration of specific interventions. A standard test or behavior assessment is conducted regularly, and changes are compared with a baseline of target behaviors and other physiological indexes obtained before introducing the treatment (Kazdin, 1982). Single-subject experimental designs are most effective when changes can be assessed in terms of immediate measurable client responses, such as an increase or decrease in the frequencies and rates of behaviors that occur immediately after specific interventions of interest; however, more complex patterned changes that require more time to emerge may not fit this design (Safran, Greenberg, & Rice, 1988). A direct causal relationship between an intervention and the client change is often assumed; as a result, background factors such as client history, the nature of therapist-client interaction, and other life events outside therapy, although reported, are not fully integrated into the analysis of change. On the other hand, systematic case studies use both quantitative and qualitative data to examine changes at multiple levels, focusing on both the outcome and the process of therapy.

There are three major areas of research in which systematic case studies play significant roles. One is filling the research-practice gap in psychotherapy (McLeod, 2002) and providing the most clinically relevant findings for practitioners. Systematic case studies are, by nature, practice oriented, and they represent a bottom-up approach to promoting evidence-based practice (Fishman, 2005; Peterson, 2004). Second, systematic case studies connect the results of RCTs and other outcome studies to clinical practice by describing in detail what the implementation of a particular manualized treatment looks like (Dattilio et al., 2010). Detailed documentation of single cases is also important for disorders and problems for which treatment efficacy has not yet been demonstrated (Beutler et al., 2004). Outcome studies often take too long to conduct and may require resources beyond the reach of most clinicians. Childhood depression, for example, is a relatively recently recognized psychological disorder with limited outcome studies (Stegall & Nangle, 2005). Single-case studies that describe the successful course of treatment and the specific interventions that were effectively implemented would be clinically useful before the results of large-scale outcome studies are disseminated. Reporting of poor outcome cases also provides crucial information for preventing therapeutic failures (Iwakabe & Gazzola, 2009). Similarly, new types of interventions can be illustrated using systematic case studies. These innovative techniques and interventions are of particular interest to many clinicians who are eager to learn and expand their clinical repertoires to work effectively with their clients. When new interventions are introduced in the clinical literature, they are often described using short case vignettes that illustrate certain clinical concepts and techniques.
However, a more complete view of the course of treatment that sets the context for evaluating the role of particular interventions objectively and thoroughly is rarely provided. Some of these interventions are experimented with and applied without sufficient information for treatment consideration. At other times, clinicians may avoid incorporating these interventions in order to avoid unforeseeable risks. Systematic case studies can be used for a more complete examination of new interventions and techniques.

Systematic case studies are also particularly suited for studying long-term therapies (e.g., Greenwood, Leach, Lucock, & Noble, 2011; Mayotte-Blum et al., 2012). In private practice, therapists often work with clients over an undetermined period of time. Many psychological problems (e.g., some personality disorders) are chronic in nature, requiring longer treatments that do not conform to the treatment guidelines specified in the short-term therapies generally studied in outcome studies. Finally, systematic case studies also allow for a closer examination of integrative and eclectic therapies. Goldfried and Wolfe (1998) argue that clinical practice is so strongly influenced by the concept of integration that researching a pure or single theoretical approach covers only a small portion of what is practiced by clinicians. In sum, systematic case studies allow an examination of the wide range of psychotherapies practiced in clinical settings and help bridge outcome research and clinical practice.

Exploring the Researcher-Practitioner Relationship

One of the problems in psychotherapy has been the schism between research and practice (Soldz & McCullough, 2000; Talley et al., 1993). It has been argued that many full-time practitioners underutilize empirical research findings in their practice (Morrow-Bradley & Elliott, 1986). Although a shift in a positive direction has been observed with the advancement of the evidence-based movement (Safran et al., 2011), many studies still fail to address the concerns and questions that clinicians encounter in their everyday practice (Beutler, Williams, Wakefield, & Entwistle, 1995; Goldfried & Wolfe, 1998). Furthermore, practitioners are often considered to be the users or consumers of research findings, but their opinions and their needs for particular research have not been reflected in the topics or designs of studies, which are conducted mostly by academicians who may have a limited view of the issues relevant to clinical practice in a variety of settings. In order to rectify the schism between research and practice, it is necessary to create a new relationship between researchers and practitioners, one that is not a uni-directional flow of information from researchers to practitioners supplying laboratory-based findings to clinical practice, but one in which practitioners' concerns are incorporated into research questions, designs, and implementation so that their practice becomes the source that generates clinically relevant data.

The Pennsylvania Psychological Association Practice Research Networks: Castonguay and his colleagues developed the Pennsylvania Psychological Association Practice Research Networks (PPA-PRNs), which promote active collaboration between researchers and clinicians in developing scientifically rigorous and clinically relevant psychotherapy research. The PPA-PRNs explore the researcher-practitioner partnership by involving practitioners at all levels of research, from selecting topics of investigation and building hypotheses to constructing research designs and disseminating the findings, under the leadership of both a full-time academician and a full-time clinician (Borkovec, Echemendia, Ragusea, & Ruiz, 2001). In one of their studies (Castonguay et al., 2010), which focused on client-identified helpful and hindering events during sessions to help therapists better address their clients' needs, 13 clinicians of varying theoretical orientations participated in the design and implementation of the study. For a period of 18 months, the psychotherapists participating in this project invited all of their new clients (including adolescents and children) to participate in the study, except those for whom participation was clinically contraindicated. As a result, 146 clients contributed over 1600 helpful or hindering events, which were analyzed by three independent judges using a category system. The results indicated that both clients and therapists perceived the fostering of client self-awareness as helpful. Findings also highlighted the importance of paying attention to the therapeutic alliance and other interpersonal relationships. Follow-up qualitative interviews conducted with participating therapists revealed several benefits of research participation, such as learning information that improved their work and a sense of contributing to research and to other practitioners. Difficulties of integrating the research protocol into routine clinical practice were also reported.

The PPA-PRNs attempt to intertwine research activities with daily clinical tasks: clinicians' day-to-day practice becomes the site of data collection, and their questions about what contributes to change become the central research questions. Research and practice are "seamlessly integrated" (Wachtel, 1991). This is a creative attempt to overcome the division between research and practice, and between researcher and practitioner. Finally, this research-practice integration model has been extended to doctoral training, in which students are trained in certain interventions and collect data from their clients for dissertation studies (Boswell, McAleavey, Castonguay, Hayes, & Locke, 2012; McAleavey et al., 2012). The project is moving toward creating infrastructures that allow experienced clinicians, trainees, and researchers to practice this type of integration of research and practice (Castonguay, 2011).

Systematic evaluation and prevention of treatment failure: Evidence-based practice is not simply about applying empirically supported treatment models or empirically based knowledge in practice, but also about introducing systematic, objective measures to monitor the client's functioning and progress in order to improve the quality of therapy and prevent treatment failures. The importance of careful and regular monitoring of patients' progress has been noted by many, and several measures have been developed that are relatively easy to administer and that provide a general view of client functioning, such as the Clinical Outcomes in Routine Evaluation (CORE-OM; Evans et al., 2000), the Shorter Psychotherapy and Counselling Evaluation measure (sPaCE; Halstead, Leach, & Rust, 2007), and the Outcome Questionnaire-45 (OQ-45; Lambert, Hansen, & Finch, 2001). These measures have been used widely in research and practice.

What is particularly notable is the way the OQ-45 is used to provide feedback to clinicians for clinical decision-making and session planning (Lambert, 2007). The measure has been applied to over 10,000 psychotherapy clients, and the accumulated data have been used to identify the patterns of change in clients who improved, as well as in those who deteriorated (Lambert, Harmon, Slade, Whipple, & Hawkins, 2005). The OQ-45 is administered before each session using a tablet PC that is handed to clients in the waiting room. Scoring is completed automatically by software. The results, which display client symptoms, interpersonal relations, and social functioning levels in relation to two norm groups, along with the overall functioning level plotted over the course of therapy, are immediately presented on the therapist's computer screen. The therapist is alerted when the client is at risk of dropping out or deteriorating, and can refer to "clinical support tools" (CSTs; Lambert et al., 2004), which are research-based clinical mini-manuals that provide information about how to repair alliance ruptures and cope with other problems. Lambert (2007) reported that therapists, when given weekly feedback on client progress based on the OQ-45, not only had a higher number of clients who achieved clinically significant change, but also had half the number of clients who deteriorated.

Lambert's attempt to build a feedback system based on systematic evaluation represents one important strand of research that proposes how research findings can be integrated into routine clinical practice.
Its focus on client symptomatology is partly based on meta-analytic findings that client factors account for the largest portion of outcome variance (Lambert & Barley, 2002). Clinicians draw on research-based information in every session they conduct, and in every session they also contribute data to a growing database. The research process is thus embedded in routine care: it is a system for integrating research and practice at the session-to-session level.
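The session-level feedback logic described above can be illustrated with a brief sketch. The following Python fragment is purely illustrative and is not Lambert's OQ-45 software: the expected-change curve, the alarm threshold, and all names (ClientRecord, expected_score, feedback_signal, alarm_margin) are hypothetical stand-ins for the empirically derived trajectories and cutoffs that the actual system employs.

    # Illustrative sketch only: a toy session-by-session feedback signal, NOT the
    # OQ-45 system. Scores, thresholds, and the expected-change curve are invented.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ClientRecord:
        client_id: str
        intake_score: float                      # symptom score at intake (higher = more distress)
        session_scores: List[float] = field(default_factory=list)

    def expected_score(intake: float, session: int) -> float:
        """Hypothetical expected-change trajectory: gradual improvement that levels
        off over sessions (a stand-in for empirically derived norms)."""
        improvement = 0.35 * intake * (1 - 0.8 ** session)
        return intake - improvement

    def feedback_signal(record: ClientRecord, alarm_margin: float = 10.0) -> str:
        """Compare the latest score with the expected trajectory and return a simple
        feedback flag for the therapist before the next session."""
        session = len(record.session_scores)
        if session == 0:
            return "no data yet"
        latest = record.session_scores[-1]
        expected = expected_score(record.intake_score, session)
        if latest >= expected + alarm_margin:
            return "alert: possible deterioration - consult clinical support tools"
        return "on track"

    # Example: a client whose scores stop improving triggers an alert.
    client = ClientRecord(client_id="A01", intake_score=80.0)
    for score in [78.0, 74.0, 79.0, 88.0]:
        client.session_scores.append(score)
        print(len(client.session_scores), feedback_signal(client))

The clinically important design feature such a scheme mimics is that each new score is compared against an empirically derived expected trajectory rather than against the intake score alone, so that stagnation, and not only overt worsening, can trigger an alert.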

CONCLUSIONS

Studies that examine mechanisms of change are relevant and beneficial to all four models of evidence. In particular, they strengthen the interpretation of findings from RCTs, which establish causality between treatment and outcome. Although empirically supported psychotherapy relationships and research-informed principles of therapeutic change concern the effective ingredients of therapy, it remains to be understood how these ingredients work in specific treatment contexts. For example, studies examining how therapists of different orientations help clients enhance their emotion regulation capacities, and which aspects of those capacities (e.g., impulse control, tolerance of negative emotions, access to regulation strategies) are differentially enhanced by different methods, would further our understanding of change principles. Systematic case studies are particularly relevant for EBPP, which emphasizes the importance of adjusting interventions according to the patient's individual characteristics. Findings from RCTs can also be translated into more practice-oriented information when systematic case studies illustrate the process of intervention and the corresponding changes (Dattilio et al., 2010).

One of the problems that remains in the field is the division between researchers and practitioners. The scientist-practitioner model of training, which has also informed the research-practice relationship in clinical psychology and psychotherapy, assumes that all psychologists are trained as both scientists and practitioners. Peterson (1995) points out that this dual-role model does not endorse the two roles equally: psychologists are scientists first and practitioners second. In other words, this model of the research-practice relationship assumes that there is a basic science that forms the foundation of practice. This stance has been called "technical rationality," which views professional activity as instrumental problem solving made rigorous by the application of scientific theory and technique (Schön, 1983). Technical rationality, however, separates research from practice: "it sends the scientists, thinking positivistically, off in one direction, and the practitioners, thinking artistically, off in another, with neither well prepared for the challenges they will encounter" (Peterson, 1995, p. 977). According to Schön (1983), the engagement of the skilled professional can instead be described as "reflection-in-action." Professional experts often work with problems that cannot be easily defined and do not lend themselves to simple predetermined solutions. Problems are defined and framed interactively; thinking is not separated from doing, and experimentation is built into implementation. Systematic case studies represent this type of professional work and embody a different model of the research-practice relationship.

The model of technical rationality coincides with ESTs, as practitioners are required to adhere to a treatment manual that is rule-bound and prescriptive. EBPP, on the other hand, emphasizes the importance of clinical expertise in adjusting one's approach according to client characteristics. This is closer to a reflection-in-action model, which emphasizes clinical judgment in integrating scientific evidence and clinical knowledge when working with particular individuals. Clinical judgment, however, is a complex decision-making process that is not free from potential biases and flaws (Garb, 2005).
Future research is needed to delineate the factors that differentiate successful from unsuccessful decision-making processes in EBPP. The question of evidence in psychotherapy requires far more than demonstrating that certain treatments are efficacious. It requires acknowledging the strengths and limitations of different research methodologies and allowing their findings to augment one another. It also requires re-conceptualizing the research-practice relationship and promoting research that is not only relevant but also compelling to practitioners as a source of information.

REFERENCES

American Psychological Association Presidential Task Force on Evidence-Based Practice. 2006. Evidence-based practice in psychology. American Psychologist, 61, 271–285.
Addis, M. E., Wade, W. A., & Hatgis, C. 1999. Barriers to dissemination of evidence-based practices: Addressing practitioners' concerns about manual-based psychotherapies. Clinical Psychology: Science and Practice, 6, 430–441.
Allgood, S. M., Parham, K. B., Salts, C. J., & Smith, T. A. 1995. The association between pretreatment change and unplanned termination in family therapy. The American Journal of Family Therapy, 23, 195–202.
American Psychological Association. 1995. Template for developing guidelines: Interventions for mental disorders and psychosocial aspects of physical disorders. Washington, DC: American Psychological Association.
American Psychological Association. 2002. Criteria for evaluating treatment guidelines. American Psychologist, 57, 1052–1059.
Barlow, D. H., & Hersen, M. 1984. Single case experimental designs: Strategies for studying behavior. New York: Pergamon Press.
Bedi, R. B., Davis, M. D., & Williams, M. 2005. Critical incidents in the formation of the therapeutic alliance from the client's perspective. Psychotherapy, 42, 311–323.
Beutler, L. E. 2009. Making science matter in clinical practice: Redefining psychotherapy. Clinical Psychology: Science and Practice, 16, 301–317.
Beutler, L. E., Williams, R. E., Wakefield, P. J., & Entwistle, S. R. 1995. Bridging scientist and practitioner perspectives in clinical psychology. American Psychologist, 50, 984–994.
Beutler, L. E., Malik, M., Alimohamed, S., Harwood, T. M., Talebi, H., Noble, S., et al. 2004. Therapist variables. In M. J. Lambert (Ed.), Bergin & Garfield's handbook of psychotherapy and behavior change (5th ed., pp. 227–306). New York: Wiley.
Bohart, A., O'Hara, M., & Leitner, L. M. 1998. Empirically violated treatments: Disenfranchisement of humanistic and other psychotherapies. Psychotherapy Research, 8, 141–157.
Bohart, A. C., & Tallman, K. 1999. How clients make therapy work: The process of active self-healing. Washington, DC: American Psychological Association.
Borkovec, T. D., Echemendia, R. J., Ragusea, S. A., & Ruiz, M. 2001. The Pennsylvania Practice Research Network and future possibilities for clinically meaningful and scientifically rigorous psychotherapy effectiveness research. Clinical Psychology: Science and Practice, 8, 155–167.
Boswell, J. F., McAleavey, A. A., Castonguay, L. G., Hayes, J. A., & Locke, B. D. 2012. Previous mental health service utilization and change in clients' depressive symptoms. Journal of Counseling Psychology, 59, 368–378.
Busch, A. M., Kanter, J. W., Landes, S. J., & Kohlenberg, R. J. 2006. Sudden gains and outcome: A broader temporal analysis of cognitive therapy for depression. Behavior Therapy, 37, 61–68.
Castelnuovo, G. 2010. Empirically supported treatments in psychotherapy: Toward the evidence-based or evidence-biased psychology in clinical settings? Frontiers in Psychology, 27, 1–10.
Castonguay, L. G. 2011. Psychotherapy, psychopathology, research and practice: Pathways of connections and integration. Psychotherapy Research, 21, 125–140.
Castonguay, L. G., & Beutler, L. E. (Eds.). 2005a. Principles of therapeutic change that work. New York: Oxford University Press.
Castonguay, L. G., & Beutler, L. E. 2005b. Common and unique principles of therapeutic change: What do we know and what do we need to know? In L. G. Castonguay & L. E. Beutler (Eds.), Principles of therapeutic change that work (pp. 353–369). New York: Oxford University Press.
Castonguay, L. G., Boswell, J. F., Zack, S., Baker, S., Boutselis, M., Chiswick, N., et al. 2010. Helpful and hindering events in psychotherapy: A practice research network study. Psychotherapy, 47, 327–344.
Chambless, D. L., Baker, M. J., Baucom, D. H., Beutler, L. E., Calhoun, K. S., Crits-Christoph, P., et al. 1998. Update on empirically validated therapies, II. The Clinical Psychologist, 51, 3–16.
Chambless, D. L., & Hollon, S. D. 1998. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7–18.
Chambless, D. L., Sanderson, W. C., Shoham, V., Bennett Johnson, S., Pope, K. S., Crits-Christoph, P., et al. 1996. An update on empirically validated therapies. The Clinical Psychologist, 49, 5–18.
Chorpita, B. F., Yim, L. M., Donkervoet, J. C., Arensdorf, A., Amundsen, M. J., McGee, C., et al. 2002. Toward large-scale implementation of empirically supported treatments for children: A review and observations by the Hawaii empirical basis to services task force. Clinical Psychology: Science and Practice, 9, 165–190.
Dattilio, F. M., Edwards, D. J. A., & Fishman, D. B. 2010. Case studies within a mixed methods paradigm: Towards a resolution of the alienation between researcher and practitioner in psychotherapy research. Psychotherapy, 47, 427–441.
Dawes, R. M. 1994. House of cards: Psychology and psychotherapy built on myth. New York: Free Press.
Department of Health. 2001. Treatment choice in psychological therapies and counseling: Evidence-based practice guidelines. London: Department of Health Publications.
Dreier, O. 2008. Psychotherapy and everyday life. London: Cambridge University Press.
Duncan, B. L., Hubble, M. A., & Miller, S. D. 1997. Psychotherapy of "impossible cases". New York: Norton.
Edwards, D. J. A. 2007. Collaborative versus adversarial stances in scientific discourse: Implications for the role of systematic case studies in the development of evidence-based practice in psychotherapy. Pragmatic Case Studies in Psychotherapy, 3, 6–34.
Edwards, D. J. A., Dattilio, F. M., & Bromley, D. B. 2004. Developing evidence-based practice: The role of case-based research. Professional Psychology: Research and Practice, 35, 589–597.
Eells, T. D. 2007. Generating and generalizing knowledge about psychotherapy from pragmatic case studies. Pragmatic Case Studies in Psychotherapy, 3, 35–54. Retrieved January 13, 2009, from http://pcsp.libraries.rutgers.edu/index.php/pcsp/article/view/893/2263
Elkin, I. 1994. The NIMH Treatment of Depression Collaborative Research Program: Where we began and where we are. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 114–139). Oxford, England: John Wiley & Sons.
Elliott, R. 1984. A discovery-oriented approach to significant events in psychotherapy: Interpersonal process recall and comprehensive process analysis. In L. Rice & L. Greenberg (Eds.), Patterns of change: Intensive analysis of psychotherapeutic process (pp. 249–286). New York: Guilford.
Elliott, R. 1998. Editor's introduction: A guide to the empirically supported treatments controversy. Psychotherapy Research, 8, 115–125.
Elliott, R. 2002. Hermeneutic single-case efficacy design. Psychotherapy Research, 12, 1–21.
Elliott, R. 2010. Psychotherapy change process research: Realizing the promise. Psychotherapy Research, 20, 123–135.
Elliott, R., Bohart, A. C., Watson, J. C., & Greenberg, L. S. 2011. Empathy. Psychotherapy, 48, 43–49.
Elliott, R., Shapiro, D. A., Firth-Cozens, J., Stiles, W. B., Hardy, G. E., Llewelyn, S. P., et al. 1994. Comprehensive process analysis of insight events in cognitive-behavioral and psychodynamic-interpersonal psychotherapies. Journal of Counseling Psychology, 41, 449–463.
Evans, C., Mellor-Clark, J., Margison, F., Barkham, M., Audin, K., Connell, J., et al. 2000. CORE: Clinical Outcomes in Routine Evaluation. Journal of Mental Health, 9, 247–255.
Fishman, D. B. 1999. The case for pragmatic psychology. New York: NYU Press.
Fishman, D. B. 2005. Editor's introduction to PCSP: From single case to database: A new method for enhancing psychotherapy practice. Pragmatic Case Studies in Psychotherapy, 1, 1–50.
Fitzpatrick, M., Stalikas, A., & Iwakabe, S. 2005. Perspective divergence in the working alliance. Psychotherapy Research, 15, 69–80.
Flyvbjerg, B. 2006. Five misunderstandings about case-study research. Qualitative Inquiry, 12, 219–244.
Follette, W. C., & Greenberg, L. S. 2005. Technique factors in treating dysphoric disorders. In L. G. Castonguay & L. E. Beutler (Eds.), Principles of therapeutic change that work (pp. 83–109). New York: Oxford University Press.
Frazier, S. L., Formoso, D., Birman, D., & Atkins, M. C. 2008. Closing the research to practice gap: Redefining feasibility. Clinical Psychology: Science and Practice, 15, 125–129.
Garb, H. N. 2005. Clinical judgment and decision making. Annual Review of Clinical Psychology, 1, 67–89.
Goldfried, M. R. 1980. Toward the delineation of therapeutic change principles. American Psychologist, 35, 991–999.
Goldfried, M. R. (Ed.). 2000. How therapists change: Personal and professional reflections. Washington, DC: American Psychological Association.
Goldfried, M. R. 2005. Toward a common language for case formulation. Journal of Psychotherapy Integration, 5, 221–244.
Goldfried, M. R., & Wolfe, B. E. 1998. Toward a more clinically valid approach to therapy research. Journal of Consulting and Clinical Psychology, 66, 143–150.
Greenberg, L. 1986. Change process research. Journal of Consulting and Clinical Psychology, 54, 4–9.
Greenberg, L. S. 1991. Research on the process of change. Psychotherapy Research, 1, 3–16.
Greenberg, L. S., & Watson, J. C. 2006. Change process research. In J. C. Norcross, L. E. Beutler, & R. F. Levant (Eds.), Evidence-based practice in mental health: Debates and dialogue on the fundamental questions (pp. 81–89). Washington, DC: American Psychological Association.
Greenwood, H., Leach, C., Lucock, M., & Noble, R. 2011. The process of long-term art therapy: A case study combining artwork and clinical outcome. Psychotherapy Research, 17, 588–599.
Halstead, J. E., Leach, C., & Rust, J. 2007. The development of a brief measure for the evaluation of psychotherapy and counseling (sPaCE). Psychotherapy Research, 17, 656–672.
Henry, W. 1998. Science, politics, and the politics of science: The use and misuse of empirically validated treatment research. Psychotherapy Research, 8, 126–140.
Hill, C. E., Chui, H., Huang, T., Jackson, J., Liu, J., & Spangler, P. 2011. Hitting the wall: A case study of interpersonal changes in psychotherapy. Counselling and Psychotherapy Research, 11, 34–42.
Hill, C. E., Thompson, B. J., Cogar, M. C., & Denman, D. W. 1993. Beneath the surface of long-term therapy: Therapist and client report of their own and each other's covert processes. Journal of Counseling Psychology, 40, 278–287.
Hill, C. E., Thompson, B. J., & Corbett, M. M. 1992. The impact of therapist ability to perceive displayed and hidden client reactions on immediate outcome in first sessions of brief therapy. Psychotherapy Research, 2, 143–155.
Hill, C. E., Thompson, B. J., & Williams, E. 1997. A guide to conducting consensual qualitative research. The Counseling Psychologist, 25, 517–572.
Howard, K. I., Kopta, S. M., Krause, M. S., & Orlinsky, D. E. 1986. The dose-effect relationship in psychotherapy. American Psychologist, 41, 159–164.
Ilardi, S. S., & Craighead, W. E. 1994. The role of nonspecific factors in cognitive-behavior therapy for depression. Clinical Psychology: Science and Practice, 1, 138–156.
Institute of Medicine. 2001. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Iwakabe, S., & Gazzola, N. 2009. From single case studies to practice-based knowledge: Aggregating and synthesizing case studies. Psychotherapy Research, 19, 601–611.
Karon, B. P. 1995. Provision of psychotherapy under managed care: A growing crisis and national nightmare. Professional Psychology: Research and Practice, 26, 5–9.
Kazdin, A. E. 1982. Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press.
Kazdin, A. E. 2008. Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63, 146–159.
Kiesler, D. J. 1966. Some myths of psychotherapy research and the search for a paradigm. Psychological Bulletin, 65, 110–136.
Lambert, M. J. 2007. Presidential address: What we have learned from a decade of research aimed at improving psychotherapy outcome in routine care. Psychotherapy Research, 17, 1–14.
Lambert, M. J., & Barley, D. E. 2002. Research summary on the therapeutic relationship and psychotherapy outcome. In J. C. Norcross (Ed.), Psychotherapy relationships that work: Therapist contributions and responsiveness to patients (pp. 17–32). New York: Oxford University Press.
Lambert, M. J., Hansen, N. B., & Finch, A. E. 2001. Patient-focused research: Using patient outcome data to enhance treatment effects. Journal of Consulting and Clinical Psychology, 69, 159–172.
Lambert, M. J., Harmon, C., Slade, K., Whipple, J. L., & Hawkins, E. J. 2005. Providing feedback to psychotherapists on their patients' progress: Clinical results and practice suggestions. Journal of Clinical Psychology, 61, 165–174.
Lambert, M. J., & Ogles, B. M. 2004. The efficacy and effectiveness of psychotherapy. In M. J. Lambert (Ed.), Bergin and Garfield's handbook of psychotherapy and behavior change (pp. 139–193). New York: Wiley.
Lambert, M. J., Whipple, J. L., Harmon, C., Shimokawa, K., Slade, K., & Christofferson, C. 2004. Clinical support tools manual. Provo, UT: Department of Psychology, Brigham Young University.
Lawson, D. 1994. Identifying pretreatment change. Journal of Counseling & Development, 72, 244–248.
Levant, R. F., & Hasan, N. T. 2008. Evidence-based practice in psychology. Professional Psychology: Research and Practice, 39, 658–662.
Lipsey, M. W., & Wilson, D. B. 1993. The efficacy of psychological, educational, and behavioral treatment: Confirmation from meta-analysis. American Psychologist, 48, 1181–1209.
Luborsky, L., Singer, J., & Luborsky, L. 1975. Comparative studies of psychotherapy. Archives of General Psychiatry, 32, 995–1008.
Mackrill, T. 2007. Using a cross-contextual qualitative diary design to explore client experiences of psychotherapy. Counselling & Psychotherapy Research, 7, 233–239.
Mackrill, T. 2008. Exploring psychotherapy clients' independent strategies for change while in therapy. British Journal of Guidance and Counselling, 36, 441–453.
Mackrill, T. 2011. A diary-based, cross-contextual case study methodology: Background for the case of "Jane and Joe". Pragmatic Case Studies in Psychotherapy, 7, 156–186.
Mahrer, A. R. 1988. Discovery-oriented psychotherapy research: Rationale, aims, and methods. American Psychologist, 43, 694–702.
Mahrer, A. R., & Boulet, D. B. 1999. How to do discovery-oriented psychotherapy research. Journal of Clinical Psychology, 55, 1481–1493.
Mayotte-Blum, J., Slavin-Mulford, J., Lehmann, M., Pesale, F., Becker-Matero, N., & Hilsenroth, M. 2012. Therapeutic immediacy across long-term psychodynamic psychotherapy: An evidence-based case study. Journal of Counseling Psychology, 59, 27–40.
McAleavey, A. A., Nordberg, S. S., Hayes, J. A., Castonguay, L. G., Locke, B. D., & Lockard, A. J. 2012. Clinical validity of the Counseling Center Assessment of Psychological Symptoms-62 (CCAPS-62): Further evaluation and clinical applications. Journal of Counseling Psychology, 59, 575–590.
McLeod, J. 2002. Case studies and practitioner research: Building knowledge through systematic inquiry into individual cases. Counselling and Psychotherapy Research, 2, 264–268.
McLeod, J. 2010. Case study research in counselling and psychotherapy. London: Sage.
McLeod, J., & Elliott, R. 2011. Systematic case study research: A practice-oriented introduction to building an evidence base for counselling and psychotherapy. Counselling and Psychotherapy Research: Linking Research with Practice, 11, 1–10.
Messer, S. B. 2004. Evidence-based practice: Beyond empirically supported treatments. Professional Psychology: Research and Practice, 36, 580–588.
Messer, S. B. 2007. Psychoanalytic case studies and the Pragmatic Case Study method. Pragmatic Case Studies in Psychotherapy, 3, 55–58.
Midgley, N. 2006. The 'inseparable bond between cure and research': Clinical case study as a method of psychoanalytic inquiry. Journal of Child Psychotherapy, 32, 122–147.
Morgan, A. R., Roberts, J. E., & Ciesla, J. A. 2005. Sudden gains in cognitive behavioral treatment for depression: When do they occur and do they matter? Behaviour Research and Therapy, 43, 703–714.
Morrison, K., Bradley, R., & Westen, D. 2003. The external validity of controlled clinical trials of psychotherapy for depression and anxiety: A naturalistic study. Psychology and Psychotherapy: Theory, Research and Practice, 76, 109–132.
Morrow-Bradley, C., & Elliott, R. 1986. The utilization of psychotherapy research by practicing psychotherapists. American Psychologist, 41, 188–197.
Norcross, J. C. (Ed.). 2002. Psychotherapy relationships that work: Therapist contributions and responsiveness to patient needs. New York: Oxford University Press.
Norcross, J. C., Beutler, L. E., & Levant, R. F. (Eds.). 2005. Evidence-based practices in mental health: Debate and dialogue on the fundamental questions. Washington, DC: American Psychological Association.
Norcross, J. C., & Wampold, B. E. 2011. Evidence-based therapy relationships: Research conclusions and clinical practices. Psychotherapy, 48, 98–102.
Peterson, D. R. 1995. The reflective educator. American Psychologist, 50, 975–983.
Peterson, D. R. 2004. Science, scientism, and professional responsibility. Clinical Psychology: Science and Practice, 11, 196–201.
Reed, G. M., Kihlstrom, J. F., & Messer, S. B. 2006. What qualifies as evidence of effective practice? In J. C. Norcross, L. E. Beutler, & R. F. Levant (Eds.), Evidence-based practices in mental health: Debate and dialogue on the fundamental questions (pp. 13–55). Washington, DC: American Psychological Association.
Rennie, D. L. 1994a. Storytelling in psychotherapy: The client's subjective experience. Psychotherapy: Theory, Research, Practice, Training, 31, 234–243.
Rennie, D. L. 1994b. Clients' deference in psychotherapy. Journal of Counseling Psychology, 41, 427–437.
Roth, A., & Fonagy, P. 2004. What works for whom? A critical review of psychotherapy research (2nd ed.). New York: Guilford Press.
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. 1996. Evidence based medicine: What it is and what it isn't. British Medical Journal, 312, 71–72.
Safran, J. D., Abreu, I., Ogilvie, J., & DeMaria, A. 2011. Does psychotherapy research influence the clinical practice of researcher–clinicians? Clinical Psychology: Science and Practice, 18, 357–371.
Safran, J., Greenberg, L. S., & Rice, L. 1988. Integrative psychotherapy research and practice: Modeling the change process. Psychotherapy, 25, 1–17.
Schön, D. A. 1983. The reflective practitioner: How professionals think in action. New York: Basic Books.
Shakow, D., Hilgard, E. R., Kelly, E. L., Luckey, B., Sanford, R. N., & Shaffer, L. F. 1947. Recommended graduate training program in clinical psychology. American Psychologist, 2, 539–558.
Silberschatz, G., & Curtis, J. T. 1993. Measuring the therapist's impact on the patient's therapeutic progress. Journal of Consulting and Clinical Psychology, 61, 403–411.
Slife, B. D., & Gantt, E. 1999. Methodological pluralism: A framework for psychotherapy research. Journal of Clinical Psychology, 55, 1–13.
Smith, M. L., & Glass, G. V. 1977. Meta-analysis of psychotherapy outcome studies. American Psychologist, 32, 752–760.
Smith, M. L., Glass, G. V., & Miller, T. L. 1980. Benefits of psychotherapy. Baltimore: Johns Hopkins University Press.
Soldz, S., & McCullough, L. (Eds.). 2000. Reconciling empirical knowledge and clinical experience: The art and science of psychotherapy. Washington, DC: American Psychological Association.
Spence, D. P. 2001. Dangers of anecdotal reports. Journal of Clinical Psychology, 57, 37–41.
Stegall, S. D., & Nangle, D. W. 2005. Successes and failures in the implementation of a manualized treatment for childhood depression in an outpatient setting. Clinical Case Studies, 4, 227–245.
Stiles, W. B. 2005. Case studies. In J. C. Norcross, L. E. Beutler, & R. F. Levant (Eds.), Evidence-based practices in mental health: Debate and dialogue on the fundamental questions (pp. 57–64). Washington, DC: American Psychological Association.
Stiles, W. B. 2007. Theory-building case studies of counselling and psychotherapy. Counselling and Psychotherapy Research, 7, 122–127.
Strauss, B. M., & Kaechele, H. 1998. The writing on the wall: Comments on the current discussion about empirically validated treatments in Germany. Psychotherapy Research, 8, 158–170.
Talley, P. F., Strupp, H. H., & Butler, S. F. (Eds.). 1993. Psychotherapy research and practice: Bridging the gap. New York: Basic Books.
Tavris, C. 2003. Foreword. In S. O. Lilienfeld, S. J. Lynn, & J. M. Lohr (Eds.), Science and pseudoscience in clinical psychology. New York: Guilford Press.
Task Force for the Development of Guidelines for the Provision of Humanistic Psychosocial Services. 1997. Guidelines for the provision of humanistic psychosocial services. Humanistic Psychologist, 25, 65–107.
Task Force on Promotion and Dissemination of Psychological Procedures. 1995. Training in and dissemination of empirically-validated psychological treatments: Report and recommendations. The Clinical Psychologist, 48, 2–23.
Timulak, L. 2007. Identifying core categories of client-identified impact of helpful events in psychotherapy: A qualitative meta-analysis. Psychotherapy Research, 17, 305–314.
Wachtel, P. L. 1991. From eclecticism to synthesis: Toward a more seamless psychotherapeutic integration. Journal of Psychotherapy Integration, 1, 43–54.
Wampold, B. E. 2001. The great psychotherapy debate: Models, methods, and findings. Mahwah, NJ: Lawrence Erlbaum.
Wampold, B. E., Lichtenberg, J. W., & Waehler, C. A. 2002. Principles of empirically supported interventions in counseling psychology. The Counseling Psychologist, 30, 197–217.
Westen, D., & Morrison, K. 2001. A multi-dimensional meta-analysis of treatments for depression, panic, and generalized anxiety disorder: An empirical examination of the status of empirically supported treatments. Journal of Consulting and Clinical Psychology, 69, 875–889.
Westen, D., Novotny, C. M., & Thompson-Brenner, H. 2004. The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130, 631–663.
Witmer, L. 1996. Clinical psychology. American Psychologist, 51, 248–251. (Original work published 1907)

(Manuscript received 1 April, 2013; Revision accepted 29 April, 2013)