4th Edition
6 January, 2016

Clinical Research: Skills clinicians need to maintain effective practices

By Richard A. Sherman, PhD

The art and science of establishing clinical credibility. Or, how can you tell if that amazing new treatment really works?

Cardinal Rules for Establishing Credibility

When you prepare a clinical presentation / article, be certain to:

1. Title your presentation / article appropriately so it doesn't promise more than it can deliver or ascribe changes to one aspect of a multifaceted intervention.
2. Begin with a brief summary of what you did / found.
3. Describe the general characteristics of the group you worked with and define your inclusion and exclusion criteria.
4. Present how your patients were diagnosed. Don't fall into the trap of trusting diagnoses by others if such diagnoses are known to be frequently incorrect (e.g. physicians are terrible about correctly diagnosing headaches). Use recognized criteria so your audience will believe that your patients had the disorder you claim to be treating.
5. Use correct assessment techniques for the disorder (e.g. the MMPI is not valid for establishing the psychological components of low back pain).
6. Define your assessment so you establish the basis for saying that people learned the tasks you were teaching during your treatment. E.g., if you are teaching people to change their muscle tension, show the baseline status, then show that those people who improved changed in the desired direction. This establishes the relationship between the intervention and changes in symptoms.
7. Use the correct outcome measures and use them correctly. Review the literature so you are up to date. For example, 0-10 analog pain scales must define 10 with an objective limit such as "would faint if I had to bear the pain for one more second" rather than "the most pain I can imagine".
8. Establish pre- and post-treatment baselines of sufficient duration to establish symptom variability. E.g. headache baselines need to be between two weeks and a month. This is how you demonstrate effectiveness.
9. Use the correct design for the level of work already done on the intervention. E.g. a new idea needs only a baseline - intervention - baseline design, while a test of an idea which has already been shown to produce changes needs to incorporate a control group to show that changes are not due to non-specific effects.
10. Include sufficient subjects so your results are likely to be due to the intervention rather than chance variability (a rough sample-size sketch follows this table).
11. Clearly explain what your intervention was and have some way to know that there was sufficient intensity to have a chance of causing a change. E.g. one relaxation session isn't likely to cure anything.
12. Present your results clearly with graphics rather than just tables. Show sufficient descriptive statistics so people can decide what happened.
13. Never ascribe symptom changes to one aspect of a multifaceted intervention when you have no way to tease out the effect of that aspect. E.g. if you gave relaxation training and biofeedback, don't say that the changes were due to biofeedback.
14. Not worry about the need to prove an underlying mechanism for the technique you used. All you need to do in a clinical presentation is demonstrate that a change did take place. Other types of research demonstrate how and why.

When you listen to / read a clinical presentation / article, ask whether there is / are:

1. Adequate diagnosis and assessment of the subjects?
2. An adequate pre-treatment baseline to establish symptom variability?
3. Objective outcome measures relevant to the disorder? Were they used correctly?
4. Intensity of the intervention sufficient to produce an effect?
5. A way to check whether the intervention was successful (drug taken properly, behavioral technique successfully learned and then used)?
6. Sufficient patient-subjects so the result is credible?
7. An appropriate design for the question (e.g. single group, controls, believable placebo, etc.)?
8. Sufficient descriptive statistics so results are clear?
9. Long enough follow-up so the duration of results can be established?
10. In a multifaceted intervention, were any changes in symptoms ascribed to one element of the intervention when there is no way to differentiate the effects of each part?
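Rule 10 on the presenter's side, and rule 6 on the reader's side, turn on whether enough subjects were enrolled that the result is unlikely to be chance variability; the book treats this properly in Chapter 27 (power analysis). As a minimal sketch of the underlying idea - this is a standard normal-approximation formula for comparing two group means, not a method taken from this book, and the effect sizes, alpha, and power values below are illustrative assumptions - the calculation fits in a few lines of Python:

```python
# A minimal sketch, assuming a two-group comparison of means with a
# two-sided test. The effect sizes, alpha, and power values are
# illustrative assumptions, not values taken from this book.
import math
from scipy.stats import norm

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Approximate subjects needed per group, via the standard
    normal-approximation formula n = 2 * ((z_{1-a/2} + z_power) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_power = norm.ppf(power)          # quantile giving the desired power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size_d) ** 2)

print(n_per_group(0.8))  # large effect:    ~25 per group
print(n_per_group(0.5))  # moderate effect: ~63 per group
print(n_per_group(0.2))  # small effect:    ~393 per group
```

The point the numbers make is the one the rule makes: small, plausible clinical effects need far more subjects than most single-office case series contain, so a ten-patient report of a modest change is very hard to distinguish from chance.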
Contents

Inside Front Cover: Table on cardinal rules for establishing credibility

Introduction
  1. Rationale - the crucial need to understand and use clinical research
  2. The scientific method

Section A. The need to know what you are doing
  Chapter 1. The basic steps and time-line of a project
  Chapter 2. Defensive reading of clinical literature - does the hypothesis make sense?
  Chapter 3. Protocol development - formulating and maturing a question
  Chapter 4. Background and literature searches
  Chapter 5. Determining feasibility
  Chapter 6. Research ethics
  Chapter 7. The research protocol approval process
  Chapter 8. Pitfalls in the first steps - can you trust interpretation of the data by people who are not neutral?
  Practical exercises for Section A

Section B. Basic study structures for the office and clinic environment
  Chapter 9. The logic and progression of designs
  Chapter 10. Exploratory single subject and single group designs
  Chapter 11. Observational studies - longitudinal & cross-sectional designs
  Chapter 12. Prospective experimental study designs
  Chapter 13. Outcome and quality of life studies
  Chapter 14. The protocol's research plan and design
  Chapter 15. Defensive reading of clinical literature - does the design fit the needs?
  Chapter 16. Pitfalls in study design
  Practical exercises for Section B

Section C. Establishing the credibility of data and clinical publications
  Chapter 17. Subject selection techniques - sampling, inclusion - exclusion
  Chapter 18. Hardening subjective data
  Chapter 19. Validity and reliability - defensive data entry
  Chapter 20. Survey, test, and questionnaire design
  Chapter 21. Defensive reading of clinical literature - can you trust the subjects and data?
  Chapter 22. Pitfalls in data gathering methodology
  Practical exercises for Section C

Section D. Statistics for evaluating literature & interpreting clinical data
  Chapter 23. Concepts of clinical data analysis
  Chapter 24. Descriptive statistics for evaluating clinical data
  Chapter 25. Probability and significance testing
  Chapter 26. Decision / Risk analysis and evaluating relative risk
  Chapter 27. Power analysis - determining the optimal number of subjects
  Chapter 28. Evaluation of overlap between groups through inferential statistics
  Chapter 29. Evaluation of relationships between changing variables
  Chapter 30. Dichotomous and proportional data
  Chapter 31. Outliers - data points that don't meet expectations
  Chapter 32. Pattern analysis
  Chapter 33. Survival / life table analysis
  Chapter 34. Establishing Efficacy
  Chapter 35. Establishing Cause and Effect
  Chapter 36. Defensive reading of clinical literature: Handling the data
  Chapter 37. Pitfalls in study analysis
  Practical exercises for Section D

Section E. Synthesizing the elements to produce an effective study
  Chapter 38. The protocol - incorporating statistics
  Chapter 39. The grant - extramural funding process
  Chapter 40. Writing and presenting the study
  Chapter 41. The publication submission and review process
  Chapter 42. Changing clinical practice based on what you did and read
  Chapter 43. Defensive reading of clinical literature - does the conclusion match the raw data, the data analysis, the hypothesis, and the background?
  Chapter 44. Pitfalls in the overall research process
  Practical exercises for Section E

Section F. Glossary of research and statistical terms
Section G. Typical protocol format and structure
Section H. Sample protocols and papers
  Sample 1. Protocol and consent form for a pilot study
  Sample 2. Protocol and consent form for a controlled study
  Sample 3. Unacceptable protocol and consent form
  Sample 4. Protocol using non-human animal subjects
  Sample 5. Poor paper
  Sample 6. Paper based on the pilot study in Sample 1
  Sample 7. Grant application based on the pilot study in Sample 1
Section I. An algorithm for the steps in evaluating clinical protocols and articles
Section J. Can you trust the interpretation of study results by people who have not adequately tested their assumptions or are not neutral?
Section K. Use of effect size calculations to determine the efficacy of biofeedback-based interventions
Section L. Further reading and references
Publishing and copying information, about the author

Introduction

1. Rationale:

A. So, you're a "dyed-in-the-wool," "carved-in-stone" clinician who knows there is always room for improving your skills and that innovations in practice are always thundering urgently over the horizon. The question is how to figure out which of those shiny new "innovations" you are deluged with warrant changing your practice or taking a course to learn some new skills in order to apply them. In other words, how do you separate the wheat from the chaff - or the good ideas which are still half-baked from those ready to eat? How do you know your own interventions are effective - or whether they could be even more effective? That's what this book is all about. The bottom line is that you need a working grasp of the knowledge, skills, and concepts associated with research in order to:

1. Develop, extend, and change your clinical practice through knowledgeable assessment of the clinical literature and presentations. In other words, you need to be able to decide whether to change your clinical practice based on what you read and hear.

2. Track your ongoing clinical work to spot the holes in daily practice and determine which interventions are effective on both a "case-by-case" and "disorder-by-disorder" basis (a small illustration of such tracking follows).
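To make point 2 concrete - this is a hypothetical illustration, not a procedure taken from the book, and every record, field name, and score in it is invented - even a very simple log of pre- and post-treatment ratings, summarized disorder-by-disorder, starts to show which interventions are paying off and which are not:

```python
# A hypothetical illustration of tracking ongoing clinical work.
# All records and scores are invented (0-10 symptom ratings, higher = worse).
from collections import defaultdict
from statistics import mean

cases = [
    # (disorder, intervention, pre_score, post_score)
    ("tension headache", "relaxation training", 7.0, 3.5),
    ("tension headache", "relaxation training", 6.5, 6.0),
    ("low back pain",    "biofeedback",         8.0, 4.0),
]

improvements = defaultdict(list)
for disorder, intervention, pre, post in cases:
    improvements[(disorder, intervention)].append(pre - post)

# Disorder-by-disorder summary; the individual list entries give the
# case-by-case view.
for (disorder, intervention), changes in improvements.items():
    print(f"{disorder} / {intervention}: mean improvement "
          f"{mean(changes):.1f} points across {len(changes)} cases")
```

Nothing about the idea requires code, of course - a paper log works - but keeping the records in a form you can summarize is what turns daily practice into data you can actually evaluate.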