Systematic Reviews in the Social Sciences: A Practical Guide


Systematic Reviews in the Social Sciences: A Practical Guide
Mark Petticrew and Helen Roberts

© 2006 by Mark Petticrew and Helen Roberts

Blackwell Publishing
350 Main Street, Malden, MA 02148-5020, USA
9600 Garsington Road, Oxford OX4 2DQ, UK
550 Swanston Street, Carlton, Victoria 3053, Australia

The right of Mark Petticrew and Helen Roberts to be identified as the Authors of this Work has been asserted in accordance with the UK Copyright, Designs, and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs, and Patents Act 1988, without the prior permission of the publisher.

First published 2006 by Blackwell Publishing Ltd

Library of Congress Cataloging-in-Publication Data
Petticrew, Mark.
Systematic reviews in the social sciences : a practical guide / Mark Petticrew and Helen Roberts.
p. cm. Includes bibliographical references and index.
ISBN-13: 978-1-4051-2110-1 (hardcover : alk. paper); ISBN-10: 1-4051-2110-6 (hardcover : alk. paper)
ISBN-13: 978-1-4051-2111-8 (pbk. : alk. paper); ISBN-10: 1-4051-2111-4 (pbk. : alk. paper)
1. Social sciences—Research—Methodology. 2. Social sciences—Statistical methods. I. Roberts, Helen, 1949– . II. Title.
H62.P457 2005
300'.72'3—dc22
2005011632

A catalogue record for this title is available from the British Library. Set in 10.5pt/12.5pt Bembo by SPI Publisher Services, Pondicherry, India. Printed and bound in the United Kingdom by TJ International, Padstow, Cornwall.

The publisher's policy is to use permanent paper from mills that operate a sustainable forestry policy, and which has been manufactured from pulp processed using acid-free and elementary chlorine-free practices. Furthermore, the publisher ensures that the text paper and cover board used have met acceptable environmental accreditation standards. For further information on Blackwell Publishing, visit our website: www.blackwellpublishing.com

Contents

Foreword – William R. Shadish vi
Acknowledgments x
Preface xiii
Chapter 1: Why do we need systematic reviews? 1
Chapter 2: Starting the review: Refining the question and defining the boundaries 27
Chapter 3: What sorts of studies do I include in the review? Deciding on the review's inclusion/exclusion criteria 57
Chapter 4: How to find the studies: The literature search 79
Chapter 5: How to appraise the studies: An introduction to assessing study quality 125
Chapter 6: Synthesizing the evidence 164
Chapter 7: Exploring heterogeneity and publication bias 215
Chapter 8: Disseminating the review 247
Chapter 9: Systematic reviews: Urban myths and fairy tales 265
Glossary 277
Appendix 1: The review process (and some questions to ask before starting a review) 284
Appendix 2: MOOSE guidelines 288
Appendix 3: Example of a flow diagram from a systematic review 291
Appendix 4: Example data extraction form 293
Appendix 5: Variations in the quality of systematic reviews 296
Bibliography 298
Index 324

Foreword

The idea of evidence-based practice is not new. For example, nearly 60 years ago, the scientist-practitioner model of clinical psychology in the US was based in part on the idea that clinical psychologists would scientifically investigate the effectiveness of their own treatments, and use what proved to be effective. Around 100 years ago, the British Army asked Sir Karl Pearson to review the evidence about the effects of a new typhoid vaccine, to help them decide whether to adopt that vaccine more widely.1 Such relatively isolated examples have been common for hundreds of years. What is new is the widespread acceptance that evidence-based practice is an idea whose time has come. Whether it be through the Cochrane Collaboration, the Campbell Collaboration, the What Works Clearinghouse in the US Department of Education, or any of a host of similar efforts, the idea that practices and policies supported by scientific evidence can be identified and disseminated has now been institutionalized. That in itself is a remarkable accomplishment.

In this context, this book by Petticrew and Roberts is a timely gem, full of detailed history, pithy quotes, marvelous examples, compelling arguments, practical advice, and transparent demonstrations of how to do one of the most common ways of putting the evidence into evidence-based practice – the systematic review. Practical advice leaps out at the reader from every chapter, with novel information I have not seen in related works.

Let me start with the key justification for systematic reviews, that "single studies taken in isolation are often seriously misleading" (chapter 1, this volume). Or as I tend to put it: never trust the results of single studies. One need only graph a frequency distribution of effect sizes to see that some studies will show unusually positive effects, and some unusually negative effects, substantially by virtue of chance. Yet the public, the media, and even other scientists routinely place great faith in single studies. As a result, ineffective or inaccurate policies, practices, or ideas are adopted, accurate or effective ones are dismissed, and all parties get confused.

The analogy that Petticrew and Roberts make with a survey is a good one. We would never take seriously the results of a sample survey where the sample was a single person, and we all understand why – we will often get a different answer by asking a different person. The same is true in systematic reviews.
We will often get a different answer when we look at a different study. In sample surveys, only a large sample is capable of giving us a reliable answer. In reviewing a literature on a question, only the systematic review process is capable of helping to clarify where the answer really lies.

Yet as Petticrew and Roberts point out in chapter 2 and elaborate in chapter 3, we have had an unnecessarily narrow view of what a systematic review can do – answer questions about cause-and-effect relationships. Systematic reviews need not address only what works, as important as that question is. Such reviews can address the evidence about nearly any kind of question: for example, the prevalence of a problem, the correlation of two variables with each other, the implementation of a program, the meaning program stakeholders ascribe to a program or its context, the causal mediating processes that help us understand why it works, and program costs, to name just a few. Similarly, our understanding of the question of what works has often been too narrow. That question involves high-quality evidence about cause-and-effect relationships, to be sure. But we are also informed about what works by, for example, knowledge that some stakeholder groups refuse to participate, that costs are too high to be practical, or that the need for the treatment is questionable.

What is less clear is whether we have well-developed technologies for addressing some of these questions. The technological problem is twofold: how to integrate studies that are all of one kind (e.g., about need) with each other, and how to integrate studies of one kind (e.g., need) with studies of another kind (e.g., treatment effectiveness). The former problem is relatively unproblematic for most kinds of quantitative studies, but is far less clear for qualitative studies. The latter problem is almost completely unaddressed. It is this context that makes me so pleased to see the effort Petticrew and Roberts have put into both these problems. Many people talk about the value of integrating quantitative and qualitative studies, for example, but actual examples of such integration are few and far between. Petticrew and Roberts lay out the logic of such integrations and some of the methods that might be used for doing them, along with examples. And they don't shy away from difficulties such as the lack of standardized search terms for locating qualitative studies on a given topic. For many years, when critics have decried the narrow focus on experiments in systematic reviews of what works, I have responded by saying we are working on that problem. Now I can actually point to the work.

I was also pleased to see the huge number of references in chapter 5 about assessing the quality of different kinds of research designs. Just the compilation itself is a major contribution. Yet we should remember that quality can mean two different things: (a) quality of the implementation of the method (e.g., did the primary study author implement the qualitative method properly); and (b) quality of the inferences from the method (do the methods used support a causal inference).
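Shadish's point about the frequency distribution of effect sizes is easy to demonstrate numerically. The following is a minimal simulation sketch (not from the book; the effect size, sample sizes, and study count are arbitrary illustrative values) showing how individual small studies scatter around a fixed true effect purely by sampling error, while the average across studies stays close to the truth:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative setup: 200 small two-arm studies, all estimating the
# same true standardized effect (d = 0.3) with n subjects per arm.
true_effect, n_per_arm, n_studies = 0.3, 30, 200

effects = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    # Cohen's d: mean difference scaled by the pooled standard deviation
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    effects.append((treatment.mean() - control.mean()) / pooled_sd)

effects = np.array(effects)
print(f"true effect:          {true_effect:.2f}")
print(f"single studies range: {effects.min():.2f} to {effects.max():.2f}")
print(f"'wrong sign' studies: {(effects < 0).mean():.0%}")
print(f"mean of all studies:  {effects.mean():.2f}")
```

With only 30 subjects per arm, a nontrivial share of studies points in the wrong direction even though the true effect never changes – which is exactly why a synthesis across studies beats trusting any single study.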
Recommended publications
  • Higher Education Reform: Getting the Incentives Right
    Higher Education Reform: Getting the Incentives Right. CPB Netherlands Bureau for Economic Policy Analysis (Van Stolkweg 14, P.O. Box 80510, 2508 GM The Hague, The Netherlands) and CHEPS, University of Twente (P.O. Box 217, 7500 AE Enschede, The Netherlands). ISBN 90 5833 065 6. Contents: Preface; Introduction; 1 The Dutch higher education system (1.1 Binary system; 1.2 Formal tasks; 1.3 Types of institutions; 1.4 Funding structure; 1.5 Public expenditures on higher education; 1.6 Tuition fee policies; 1.7 Student support system; 1.8 Admission policies; 1.9 Quality control; 1.10 Enrollment; Annex: Public funding of higher education in the Netherlands, performance-based models); 2 Economics of higher education (2.1 Why do people attend higher education? 2.1.1 The human capital approach; 2.1.2 The signalling approach; 2.1.3 How high are the financial and non-financial returns to higher education? 2.2 Why public support of higher education? 2.2.1 Human capital spillovers; 2.2.2 Capital market constraints; 2.2.3 Risk; 2.2.4 Imperfect information and transparency; 2.2.5 Income redistribution; 2.2.6 Tax distortions; 2.3 How to fund higher education? 2.3.1 Student support; 2.3.2 Funding of higher education institutions; 2.4 Public versus private provision of higher education; 2.5 Should the higher education sector be deregulated? 2.6 Why combine education and research in universities? 2.7 Why and when should research be publicly funded?)
  • Data Policies of Highly-Ranked Social Science Journals
    Data policies of highly-ranked social science journals. Mercè Crosas, Julian Gautier, Sebastian Karcher, Dessi Kirilova, Gerard Otalora, Abigail Schwartz. Abstract: By encouraging and requiring that authors share their data in order to publish articles, scholarly journals have become an important actor in the movement to improve the openness of data and the reproducibility of research. But how many social science journals encourage or mandate that authors share the data supporting their research findings? How does the share of journal data policies vary by discipline? What influences these journals' decisions to adopt such policies and instructions? And what do those policies and instructions look like? We discuss the results of our analysis of the instructions and policies of 291 highly-ranked journals publishing social science research, where we studied the contents of journal data policies and instructions across 14 variables, such as when and how authors are asked to share their data, and what role journal ranking and age play in the existence and quality of data policies and instructions. We also compare our results to the results of other studies that have analyzed the policies of social science journals, although differences in the journals chosen and in how each study defines what constitutes a data policy limit this comparison. We conclude that a little more than half of the journals in our study have data policies. A greater share of the economics journals have data policies and mandate sharing, followed by political science/international relations and psychology journals. Finally, we use our findings to make several recommendations: policies should include the terms "data," "dataset" or more specific terms that make it clear what to make available; policies should include the benefits of data sharing; journals, publishers, and associations need to collaborate more to clarify data policies; and policies should explicitly ask for qualitative data.
  • “Dysrationalia” Among University Students: The Role of Cognitive Abilities, Different Aspects of Rational Thought and Self-Control in Explaining Epistemically Suspect Beliefs
    “Dysrationalia” among university students: The role of cognitive abilities, different aspects of rational thought and self-control in explaining epistemically suspect beliefs. Nikola Erceg, Zvonimir Galić (Department of Psychology, Faculty of Humanities and Social Sciences, University of Zagreb, Croatia) and Andreja Bubić (Department of Psychology, Faculty of Humanities and Social Sciences, University of Split, Croatia). Europe's Journal of Psychology, 2019, 15, 159–175 (journal article, published version, publisher's PDF). https://doi.org/10.5964/ejop.v15i1.1696. Permanent link: https://urn.nsk.hr/urn:nbn:hr:131:942674. Rights: Attribution 4.0 International. Repository: ODRAZ – open repository of the University of Zagreb Faculty of Humanities and Social Sciences. Abstract: The aim of the study was to investigate the role that cognitive abilities, rational thinking abilities, cognitive styles and self-control play in explaining the endorsement of epistemically suspect beliefs among university students. A total of 159 students participated in the study. We found that different aspects of rational thought (i.e. rational thinking abilities and cognitive styles) and self-control, but not intelligence, significantly predicted the endorsement of epistemically suspect beliefs. Based on these findings, it may be suggested that intelligence and rational thinking, although related, represent two fundamentally different constructs.
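The design described in this abstract (several cognitive predictors regressed on belief endorsement) maps onto an ordinary multiple regression. Below is a minimal sketch on simulated stand-in data, not the authors' analysis; the variable names and effect sizes are invented for illustration only:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 159  # sample size reported in the abstract

# Simulated predictors: rational thinking correlates with intelligence
# (related constructs), self-control is independent.
intelligence = rng.normal(size=n)
rational_thinking = 0.5 * intelligence + rng.normal(size=n)
self_control = rng.normal(size=n)

# Outcome simulated so rational thinking and self-control matter while
# intelligence does not, mirroring the pattern the abstract reports.
suspect_beliefs = -0.6 * rational_thinking - 0.3 * self_control + rng.normal(size=n)

X = sm.add_constant(
    np.column_stack([intelligence, rational_thinking, self_control]))
fit = sm.OLS(suspect_beliefs, X).fit()
print(fit.summary(xname=["const", "intelligence", "rational", "self_control"]))
```

On data like these, the regression table shows significant coefficients for the rational-thinking and self-control columns but not for intelligence once the other predictors are controlled.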
  • Working Memory, Cognitive Miserliness and Logic As Predictors of Performance on the Cognitive Reflection Test
    Edward J. N. Stupple ([email protected]), Maggie Gale ([email protected]) and Christopher R. Richmond ([email protected]); Centre for Psychological Research, University of Derby, Kedleston Road, Derby DE22 1GB. Abstract: The Cognitive Reflection Test (CRT) was devised to measure the inhibition of heuristic responses to favour analytic ones. Toplak, West and Stanovich (2011) demonstrated that the CRT was a powerful predictor of heuristics and biases task performance, proposing it as a metric of the cognitive miserliness central to dual-process theories of thinking. This thesis was examined using reasoning response times, normative responses from two reasoning tasks and working memory capacity (WMC) to predict individual differences in performance on the CRT. These data offered limited support for the view of miserliness as the primary factor in the CRT. From the paper's introduction: Most participants respond that the answer is 10 cents; however, a slower and more analytic approach to the problem reveals the correct answer to be 5 cents. The CRT has been a spectacular success, attracting more than 100 citations in 2012 alone (Scopus). This may be in part due to its ease of administration; with only three items and no requirement for expensive equipment, the practical advantages are considerable. There have, moreover, been numerous correlates of the CRT demonstrated, from a wide range of tasks in the heuristics and biases literature (Toplak et al., 2011) to risk aversion and SAT scores (Frederick, 2005).
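The "10 cents" response refers to the CRT's best-known item, the bat-and-ball problem: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball (Frederick, 2005). A worked line of algebra, added here for illustration, shows why the intuitive answer fails. Letting x be the ball's cost in dollars:

\[
x + (x + 1.00) = 1.10 \;\Rightarrow\; 2x = 0.10 \;\Rightarrow\; x = 0.05
\]

So the ball costs 5 cents; if it cost the intuitive 10 cents, the bat would have to cost $1.10 and the pair would total $1.20.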
  • D2.2: Research Data Exchange Solution
    H2020-ICT-2018-2 / ICT-28-2018-CSA. SOMA: Social Observatory for Disinformation and Social Media Analysis [825469]. Deliverable D2.2: Research data exchange (and transparency) solution with platforms. Work package WP2: Methods and Analysis for disinformation modeling. Type: Report. Dissemination level: Public. Date: 30/08/2019. Status: Final. Authors: Lynge Asbjørn Møller and Anja Bechmann (DATALAB, Aarhus University); contributors: see fact-checking interviews and meetings in appendix 7.2. Reviewers: Noemi Trino and Stefano Guarino (LUISS Datalab, LUISS University). Document description: This deliverable compiles the findings and the recommended solutions and actions needed to construct a sustainable data exchange model for stakeholders, from a differentiated perspective: one for journalists and the broader community, and one for university-based academic researchers. Executive summary: This report provides an evaluation of current solutions for data transparency and exchange with social media platforms, an account of the historic obstacles and developments within the subject, and a prioritized list of future scenarios and solutions for data access with social media platforms. The evaluation of current solutions and the historic accounts are based primarily on a systematic review of academic literature on the subject, expanded by an account of the most recent developments and solutions.
  • A Comprehensive Analysis of the Journal Evaluation System in China
    A comprehensive analysis of the journal evaluation system in China. Ying Huang, Ruinan Li, Lin Zhang (School of Information Management, Wuhan University, China; Centre for R&D Monitoring (ECOOM) and Department of MSI, KU Leuven, Belgium) and Gunnar Sivertsen (Nordic Institute for Studies in Innovation, Research and Education, Oslo, Norway). Abstract: Journal evaluation systems are important in science because they focus on the quality of how new results are critically reviewed and published. They are also important because the prestige and scientific impact of journals are given attention in research assessment and in career and funding systems. Journal evaluation has become increasingly important in China with the expansion of the country's research and innovation system and its rise as a major contributor to global science. In this paper, we first describe the history and background of journal evaluation in China. Then, we systematically introduce and compare the most influential journal lists and indexing services in China. These are: the Chinese Science Citation Database (CSCD); the journal partition table (JPT); the AMI Comprehensive Evaluation Report (AMI); the Chinese STM Citation Report (CJCR); "A Guide to the Core Journals of China" (GCJC); the Chinese Social Sciences Citation Index (CSSCI); and the World Academic Journal Clout Index (WAJCI). Some other lists published by government agencies, professional associations, and universities are briefly introduced as well. Thereby, the tradition and landscape of the journal evaluation system in China is comprehensively covered. Methods and practices are closely described and sometimes compared to how other countries assess and rank journals. Keywords: scientific journal; journal evaluation; peer review; bibliometrics.
  • The Journal Coverage of Web of Science and Scopus: A Comparative Analysis
    The Journal Coverage of Web of Science and Scopus: A Comparative Analysis. Philippe Mongeon and Adèle Paul-Hus ([email protected]; [email protected]), Université de Montréal, École de bibliothéconomie et des sciences de l'information, C.P. 6128, Succ. Centre-Ville, H3C 3J7 Montréal, QC, Canada. Abstract: Bibliometric methods are used in multiple fields for a variety of purposes, namely for research evaluation. Most bibliometric analyses have in common their data sources: Thomson Reuters' Web of Science (WoS) and Elsevier's Scopus. This research compares the journal coverage of both databases in terms of fields, countries and languages, using Ulrich's extensive periodical directory as a base for comparison. Results indicate that the use of either WoS or Scopus for research evaluation may introduce biases that favor Natural Sciences and Engineering as well as Biomedical Research to the detriment of Social Sciences and Arts and Humanities. Similarly, English-language journals are overrepresented to the detriment of other languages. While both databases share these biases, their coverage differs substantially. As a consequence, the results of bibliometric analyses may vary depending on the database used. Keywords: bibliometrics, citation indexes, Scopus, Web of Science, research evaluation. Introduction: Bibliometric and scientometric methods have multiple and varied application realms, ranging from information science, sociology and history of science to research evaluation and scientific policy (Gingras, 2014). Large-scale bibliometric research was made possible by the creation and development of the Science Citation Index (SCI) in 1963, which is now part of Web of Science (WoS) alongside two other indexes: the Social Science Citation Index (SSCI) and the Arts and Humanities Citation Index (A&HCI) (Wouters, 2006).
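The comparison method described here (measuring each database's journal coverage against a common reference directory) reduces to simple set arithmetic. A minimal sketch follows; the journal identifiers are invented stand-ins, whereas a real analysis would match Ulrich's records to the WoS and Scopus title lists:

```python
# Coverage comparison against a reference directory (toy data).
ulrichs = {"J001", "J002", "J003", "J004", "J005", "J006"}  # reference population
wos     = {"J001", "J002", "J003"}
scopus  = {"J002", "J003", "J004", "J005"}

def coverage(database: set, reference: set) -> float:
    """Share of the reference directory indexed by the database."""
    return len(database & reference) / len(reference)

print(f"WoS coverage:    {coverage(wos, ulrichs):.0%}")
print(f"Scopus coverage: {coverage(scopus, ulrichs):.0%}")
print(f"indexed in both: {len(wos & scopus & ulrichs) / len(ulrichs):.0%}")
```

Repeating the same computation within each field, country, or language subset of the directory is what exposes the coverage biases the authors report.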
  • Facts Are More Important Than Novelty
    Facts Are More Important Than Novelty: Replication in the Education Sciences. Matthew C. Makel (Duke University) and Jonathan A. Plucker (Johns Hopkins University). Educational Researcher, published online 13 August 2014. DOI: 10.3102/0013189X14545513.
  • The Comprehensive Assessment of Rational Thinking
    Educational Psychologist, 51(1), 23–34, 2016. © Division 15, American Psychological Association. DOI: 10.1080/00461520.2015.1125787. 2013 Thorndike Award Address: The Comprehensive Assessment of Rational Thinking. Keith E. Stanovich, Department of Applied Psychology and Human Development, University of Toronto, Canada. Abstract: The Nobel Prize in Economics was awarded in 2002 for work on judgment and decision-making tasks that are the operational measures of rational thought in cognitive science. Because assessments of intelligence (and similar tests of cognitive ability) are taken to be the quintessence of good thinking, it might be thought that such measures would serve as proxies for the assessment of rational thought. It is important to understand why such an assumption would be misplaced. It is often not recognized that rationality and intelligence (as traditionally defined) are two different things conceptually and empirically. Distinguishing between rationality and intelligence helps explain how people can be, at the same time, intelligent and irrational. Thus, individual differences in the cognitive skills that underlie rational thinking must be studied in their own right because intelligence tests do not explicitly assess rational thinking. In this article, I describe how my research group has worked to develop the first prototype of a comprehensive test of rational thought (the Comprehensive Assessment of Rational Thinking). From the address: It was truly a remarkable honor to receive the E. L. Thorndike Career Achievement Award for 2012. The list of previous winners is truly awe-inspiring and humbling.
  • Making Services Work for the Poor: Nine Case Studies from Indonesia
    Making Services Work for the Poor: Nine Case Studies from Indonesia. The World Bank, Indonesia Poverty Analysis Program (INDOPOV), Poverty Reduction and Economic Management Unit, East Asia and Pacific Region. The World Bank Office Jakarta, Jakarta Stock Exchange Building Tower II/12th Fl., Jl. Jend. Sudirman Kav. 52-53, Jakarta 12910 (www.worldbank.or.id); The World Bank, 1818 H Street N.W., Washington, D.C. 20433, U.S.A. (www.worldbank.org). Printed in 2006. This volume is a product of staff of the World Bank. The findings, interpretations, and conclusions expressed herein do not necessarily reflect the views of the Board of Executive Directors of the World Bank or the governments they represent. The World Bank does not guarantee the accuracy of the data included in this work. The boundaries, colors, denominations, and other information shown on any map in this work do not imply any judgment on the part of the World Bank concerning the legal status of any territory or the endorsement or acceptance of such boundaries. Acknowledgements: The Making Services Work for the Poor case studies series was researched and written by a team led by Stefan Nachuk (EASPR). The document constitutes one part of a larger piece of analytical work designed to examine access to services for the poor in Indonesia.
  • Technical Commentary
    TECHNICAL COMMENTARY: Eye movement desensitisation and reprocessing. Introduction: Eye movement desensitisation and reprocessing (EMDR) is based on the observation that the intensity of traumatic memories can be reduced through eye movements. While the patient focusses on the traumatic memory or thought, he or she simultaneously moves his or her eyes back and forth, following the movement of the therapist's finger. The exact mechanisms through which EMDR works are not clear, although it is proposed that when a traumatic memory is activated in working memory, and at the same time the patient focusses on the movement of the fingers, the vividness and intensity of the memory are reduced. This blurred memory is restored in long-term memory, leading to a less emotional reaction at future activation. An alternative theory suggests EMDR elicits an investigatory reflex which first results in an alert response, and then, when it appears there is no threat, produces a sense of relaxation which inhibits negative affect associated with the traumatic memory. … stringent reporting checklists than the PRISMA, and that some reviews may have been limited by journal guidelines. Evidence was graded using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group approach, where high-quality evidence such as that gained from randomised controlled trials (RCTs) may be downgraded to moderate or low if review and study quality is limited, if there is inconsistency in results, indirect comparisons, imprecise or sparse data, or a high probability of reporting bias. It may also be downgraded if risks associated with the intervention or other matter under review are high. Conversely, low-quality evidence such as that gained from observational studies may be upgraded if effect sizes are large or if there is a dose-dependent response. We have also taken into account sample size and whether results are consistent, precise and direct with low associated risks (see end of table for an explanation of these terms)2.
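The GRADE procedure described in this excerpt behaves like a small state machine: the study design sets a starting level, each limitation steps the rating down, and strong effects step it up. A simplified sketch follows; real GRADE judgments are qualitative, and the function and rule names here are paraphrases, not an official implementation:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade(design: str, downgrades: int, upgrades: int) -> str:
    """Toy GRADE rating: RCT evidence starts 'high', observational 'low'.
    Each limitation (risk of bias, inconsistency, indirectness,
    imprecision, reporting bias) steps the rating down one level;
    large or dose-dependent effects step it up. Clamped to the scale."""
    start = 3 if design == "rct" else 1  # index into LEVELS
    level = max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))
    return LEVELS[level]

# RCT evidence with inconsistent results and sparse data:
print(grade("rct", downgrades=2, upgrades=0))            # -> 'low'
# Observational evidence with a large, dose-dependent effect:
print(grade("observational", downgrades=0, upgrades=2))  # -> 'high'
```

The two example calls mirror the two directions described in the text: high-quality trial evidence being downgraded, and observational evidence being upgraded.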
  • Social Science & Medicine
    Social Science & Medicine: Author Information Pack (ISSN: 0277-9536). Contents: Description; Audience; Impact Factor; Abstracting and Indexing; Editorial Board; Guide for Authors. Description: Social Science & Medicine provides an international and interdisciplinary forum for the dissemination of social science research on health. We publish original research articles (both empirical and theoretical), reviews, position papers and commentaries on health issues, to inform current research, policy and practice in all areas of common interest to social scientists, health practitioners, and policy makers. The journal publishes material relevant to any aspect of health from a wide range of social science disciplines (anthropology, economics, epidemiology, geography, policy, psychology, and sociology), and material relevant to the social sciences from any of the professions concerned with physical and mental health, health care, clinical practice, and health policy and organization. We encourage material which is of general interest to an international readership. The journal publishes the following types of contribution: 1) peer-reviewed original research articles and critical analytical reviews in any area of social science research relevant to health and healthcare; these papers may be up to 9,000 words including abstract, tables, figures, references and (printed) appendices as well as the main text, and papers below this limit are preferred; 2) systematic reviews and literature reviews of up to 15,000 words including abstract, tables, figures, references and (printed) appendices as well as the main text; 3) peer-reviewed short communications of findings on topical issues or published articles of between 2,000 and 4,000 words.