The Clinical Reasoning Skills of Speech and Language Therapy Students

Paper presented at the Royal College of Speech and Language Therapists Conference "Realising the vision", University of Ulster, 10-12 May 2006

Kirsty Hoben (1), Rosemary Varley (1) and Richard Cox (2)

(1) Department of Human Communication Sciences, University of Sheffield, Sheffield, UK
Tel: 0114 2222454
Fax: 0114 2730547
Email: [email protected]

(2) Department of Informatics, School of Science and Technology, University of Sussex, Falmer, Brighton, UK

Key words: clinical reasoning, students, speech and language therapy

What is already known on this subject: Studies in medicine and related fields have shown that students have difficulties with both knowledge and strategy in clinical reasoning tasks. To date, there has been little research in this area in speech and language therapy.

What this study adds: Speech and language therapy students show clinical reasoning difficulties similar to those observed in medicine and other related domains. They could benefit from explicit teaching of strategies to improve their clinical reasoning and consolidate their domain knowledge.

Abstract

Background: Difficulties experienced by novices in clinical reasoning have been well documented in many fields, especially medicine (Elstein et al., 1978; Patel and Groen, 1986; Boshuizen and Schmidt, 1992, 2000; Rikers et al., 2004). These studies have shown that novice clinicians have difficulties with both knowledge and strategy in clinical reasoning tasks. Speech and language therapy students must also learn to reason clinically, yet to date there is little evidence of how they learn to do so.

Aims: In this paper, we report the clinical reasoning difficulties of a group of speech and language therapy students. We make a comparison with experienced speech and language therapists' reasoning and propose some methods and materials to aid the development of clinical reasoning in speech and language therapy students.

Methods and procedures: Student reasoning difficulties were analysed during assessment of unseen cases on an electronic patient database, the Patient Assessment and Training System (PATSy; www.patsy.ac.uk). Students were videoed as they completed a one-hour assessment of one of three "virtual patients". One pair of experienced speech and language therapists also completed an assessment of one of these cases under the same conditions. Screen capture was used to record all on-screen activity, along with an electronic log of the tests and information accessed and comments entered by the students and experienced therapists. These comments were analysed via a seven-level coding scheme that aimed to describe the events that occur in the process of diagnostic reasoning.

Outcomes and Results: Students displayed a range of competence in making an accurate diagnosis. Diagnostically accurate students showed increased use of high-level professional register and a greater use of firm diagnostic statements.
For the diagnostically inaccurate students, typical difficulties were a failure to interpret test results and video observations, difficulty in carrying out a sequence of tests consistent with a reasoning path, and problems in recalling and using theoretical knowledge.

Conclusions and Implications: We discuss how identification of student reasoning difficulties can inform the design of learning materials intended to address these difficulties.

Introduction

Previous studies have revealed that novices display a number of difficulties in assessing a problem. Much of this research has been in the domain of medicine, and common difficulties that have been revealed are a tendency to focus on surface features of problems (Sloutsky and Yarlas, 2000) and an inflexible application of knowledge and strategies to the diagnostic problem (Mavis et al., 1998). Novices are less likely than experts to be aware of confounding information (e.g., that a language assessment may tap multiple aspects of communication), and are more likely to be data driven than theory driven in their planning. They also tend to begin tasks without clear goals (Klahr, 2000). Furthermore, novices are less able to evaluate their progress or results from a task (Hmelo-Silver et al., 2002), and have difficulty modifying or abandoning hypotheses in the face of contradictory evidence (Arocha and Patel, 1995). They can have difficulty distinguishing relevant information in a problem (Shanteau, 1992; Cholowski and Chan, 2001), and may not have well-elaborated schemata of diagnoses and patterns of presenting problems (Patel et al., 1997, 2000; Boshuizen and Schmidt, 2000). Novices can be slow in decision making and hypothesis generation (O'Neill et al., 2005; Joseph and Patel, 1990), and may harbour misconceptions or over-simplifications of domain-specific concepts which consequently affect interpretation of results (Patel et al., 1991; Schauble, 1996).

McAllister and Rose (2000) acknowledge the relative paucity of research into the processes of clinical reasoning in speech and language therapy. However, there are similarities in the global characteristics of diagnostic reasoning across related professions such as medicine, physiotherapy, occupational therapy and nursing. It is likely, therefore, that speech and language therapy novices will display similar reasoning difficulties to those observed in novices from other clinical domains.

The current research examined speech and language therapy students' developing clinical reasoning skills. Clinical reasoning involves both domain-specific knowledge and reasoning (i.e., knowledge pertaining directly to speech and language therapy) and domain-general reasoning (i.e., reasoning skills that any person could be expected to have). The study used an existing database of speech and language therapy cases, the Patient Assessment and Training System (PATSy) (Lum and Cox, 1998). The database consists of "virtual patients", and includes video clips, medical history, assessment results and links to related publications. Students are able to "administer" tests to patients and keep a log of their findings and conclusions.
Methods

Participants: The study recruited 34 masters-level and undergraduate speech and language therapy students (8 of the 34 participants were masters-level students) from two UK universities via posters on notice boards and email. Undergraduate students were in year three of their studies and masters-level students were in their second year of study. In addition, two experienced speech and language therapists took part. University ethical approval was granted for the conduct of the research and all usual ethical practices were observed.

Procedures: Students and experienced therapists worked in pairs (pairings were mostly self-selected). They were given one hour to undertake the diagnosis of one of three pre-selected PATSy cases: DBL and RS, both acquired cases, or JS1, a developmental case. The PATSy cases used for the study all exhibited a degree of ambiguity in their clinical presentation, i.e., their behavioural profile might be consistent with a number of possible diagnoses of underlying impairment. Participants were asked to produce a set of statements that described key impairments shown by the case and, if possible, an overall diagnostic category. The diagnostic process of the students was video-recorded and all participants completed a learning log that is automatically generated and stored within PATSy. A screen capture was also performed, allowing subsequent synchronised playback of the video, audio and screen activity, using NITE tools (http://www.ltg.ed.ac.uk/NITE/), developed at the University of Edinburgh.

Analyses

Prior to the coding of data, student pairs were independently categorised as diagnostically accurate (DA) or diagnostically inaccurate (DI) based on whether they reached a diagnosis that was at an appropriate level for a novice/newly qualified speech and language therapist. DA students were those that were able to report key impairments displayed by the case (e.g., type and extent of lexical retrieval deficit). Similarly, the tests selected by a pair were evaluated to determine whether the dyad was using tests that were relevant to the behavioural impairments shown by the case and to the comments they had made in dialogue and in their written log. In addition, test choices were examined for relatedness and movement along a diagnostic path. For example, a test sequence involving a switch from a picture semantics task such as Pyramids and Palm Trees (Howard and Patterson, 1992) to non-word repetition, in the context of discussion of written word comprehension, was classed as an unrelated sequence. The performance of a subset of student pairs was compared to that of experienced clinicians diagnosing the aphasic case DBL.

The statements made by participants in dialogue with their partner and in the written log were coded for particular statement types that might occur in diagnostic reasoning. The coding scheme contained seven categories.
(See the Appendix for definitions and examples of each category.)

Level Zero: Other. This category included ambiguous statements and hypotheses that could not be tested with the data available on the PATSy system.

Level One: Reading of data. This category included statements that consisted of reading data aloud without making any comment or additional interpretation.

Level Two: Making a concrete observation. This category included statements about a single piece of data which did not use any professional terminology.

Level Three: Making a superordinate level clinical observation. This category contained descriptive statements which extrapolated to a higher-level concept.

Level Four: Hypothesis. This category included statements that expressed a predicted causal relationship between two factors.

Level Five: General diagnostic statement. Statements in this category consisted of those which included or excluded a superordinate diagnostic category and were of the type that might be used in a report to another professional, rather than to a speech and language therapist.

Level Six: Specific diagnostic statement. Statements in this category shared the characteristics of Level Five diagnostic statements. However, statements at this level had a finer granularity of description than Level Five statements and might be used in a report to another speech and language therapist.

Intra-rater reliability was assessed on codings with a time interval of four months between categorisations. A kappa score of 0.970 was achieved, indicating highly satisfactory intra-rater reliability. Inter-rater reliability was established by two raters independently coding 30% of the dialogue data sample. One rater was blind to the PATSy case, participants and site at which the data were collected, although occasionally the nature of the discussion about the cases, particularly the paediatric case, made it impossible to remain blind to the case. A kappa score of 0.888 was achieved, indicating satisfactory inter-rater reliability.
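The paper reports the kappa values without stating the formula or the specific statistic used; assuming the standard Cohen's kappa for agreement between two raters (or two coding occasions), the calculation is sketched below. The numeric example is purely illustrative and is not taken from the study data.

```latex
% Cohen's kappa (assumed statistic; the paper does not name the variant used).
% p_o : observed proportion of statements assigned the same level by both raters
% p_e : agreement expected by chance, computed from each rater's marginal
%       distribution over the seven levels
\[
  \kappa = \frac{p_o - p_e}{1 - p_e}
\]
% Illustrative figures only: if raters agreed on 95% of statements and chance
% agreement were 55%, kappa = (0.95 - 0.55)/(1 - 0.55) = 0.40/0.45, approximately
% 0.89, of the same order as the reported inter-rater value of 0.888.
```

By convention, kappa values above about 0.8 are taken to indicate very strong agreement, which is consistent with the interpretation given above.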
Results

Eight pairs of students were categorised as diagnostically accurate (DA). The remaining nine pairs did not produce a diagnosis that was viewed as accurate for a novice clinician. The difficulties displayed by the diagnostically inaccurate sub-group were: a failure to interpret test results and video observations, difficulty in carrying out a sequence of tests consistent with a reasoning path, and problems in recalling and using theoretical knowledge.

Table 1 displays the average number of statements per dyad of each type produced by the DA and DI subgroups. The data reveal some disparities between the groups: the DI group had more statements at the lower, more descriptive levels, but fewer statements at Level Six.

INSERT TABLE 1 ABOUT HERE

The same data are displayed in a column graph in Figure 1. Student use of Level Three, Four and Five statements, that is, superordinate statements using professional terminology, statements postulating relationships between two variables, and general diagnostic statements, appeared with similar frequency in the two subgroups. The DA group produced more Level Six statements, where the diagnosis was expressed in professional terminology of fine granularity. This suggested that this cohort could link the patterns of behaviour observed in the patient case to highly specific domain knowledge, i.e., these students could bridge the gap between theory and practice.

INSERT FIGURE 1 ABOUT HERE

The subset of students (six dyads) who diagnosed case DBL was compared to the experienced therapist pair who evaluated the same case. The results are presented in Table 2.

INSERT TABLE 2 ABOUT HERE

Table 2 shows that the experienced therapists did not make Level Zero or Level One statements. They made very few Level Two statements but a greater number of Level Three statements, compared to either of the student groups. The experienced therapists also made a higher proportion of firm diagnostic statements at Levels Five and Six, compared to either of the student groups. Student results on case DBL conform to the general pattern observed across all PATSy cases. Again, DA students made fewer Level One and Two statements and more Level Six statements. The profile of the DA students was more similar to that of the experienced clinicians than was that of the DI group.

Further qualitative analyses of student performance revealed a number of themes indicative of problems in diagnostic reasoning. For example, some students displayed few well-elaborated schemata of diagnoses, leading to difficulties in making sense of data:

"She was making errors, wasn't she, in producing words but I haven't really found any pattern yet."

"It's hard to know what to look at isn't it? What means something and what doesn't."

"I'm not entirely sure why they're [the client's responses] not very appropriate."

The high numbers of Level One and Two statements in the DI group reflect problems in this area: the patient's behaviours are noted, but the students have difficulty interpreting their significance or relationship. Some students showed difficulty in carrying out a sequence of tests consistent with a reasoning path. For example, one dyad chose the following test sequence at the beginning of their assessment of the paediatric case JS1: a handwriting sample, a non-word reading test followed by a word reading test, and then a questionnaire on the client's social and academic functioning. They started with marginally relevant and relatively fine-grained tests before going on to look at the questionnaire. In this case, the questionnaire gave useful background information about broad areas of difficulty for the client. Evaluating this evidence would have been more useful at the beginning of their assessment, as it allows the clinician to "reduce the problem space" in which they are working and to focus their diagnostic effort on areas that are more likely to be crucial to the understanding of the case. No hypotheses or specific clinical reasons for these tests were given by the students, indicating that they were not using the tests to attempt to confirm or disconfirm a hypothesis about the case they were diagnosing.
Their approach was descriptive, rather than theory- or hypothesis-driven.

Discussion

The many studies of clinical reasoning in other related domains provide evidence that there may be common patterns of development from novice to expert that the speech and language therapy profession can learn from and, as researchers, contribute to. Empirically and theoretically supported resources could be developed, such as "intelligent tutors" using hypermedia support, to allow novice speech and language therapy students to learn in a "virtual" situation, thus allowing them to be better prepared when interacting with real patients.

The analysis of the data presented here has led to a number of ideas for enhancing students' clinical reasoning, which offer potential for use as formative assessment tools for educators, but also as self-assessment tools for students. For example, making students aware of the types of statement described in the coding scheme presented here could provide a structure for self-monitoring and assessment, enabling students to evaluate and develop their own reasoning skills. A student making Level Two statements could use the descriptors in the coding scheme to develop those types of statements into Level Three statements, for example, from "Scores worse when words are involved" (Level Two) to "Worse at accessing semantic information from written form" (Level Three).

A hypothesis can be defined as a predicted causal relationship between two factors, with an explicit or implicit "if...then" structure. Clarifying this interpretation of a hypothesis, and the type of testing behaviour that it could trigger, might help students to develop testable, pertinent hypotheses that should, in turn, make the assessment process more efficient and complete. For example, from "Could it be Asperger's?" to "If the patient had Asperger Syndrome, we would expect to see evidence of relatively intact language and cognitive abilities but difficulty in communicating, social relationships, and imagination."
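To make the "if...then" reading of a hypothesis concrete, the sketch below shows one way such a hypothesis could be represented so that it points to specific expected evidence and candidate assessments. It is purely illustrative: the class, field names and example test names are ours and are not part of PATSy or of any tool described in this paper.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """A testable 'if...then' hypothesis: IF the condition holds, THEN the
    listed observations are expected, and the listed assessments could be
    used to confirm or disconfirm them."""
    condition: str                     # the "if" part
    expected_observations: list[str]   # the "then" part
    candidate_tests: list[str] = field(default_factory=list)

    def is_testable(self) -> bool:
        # A hypothesis only triggers a search/test strategy if it predicts
        # something observable and names a way to probe it.
        return bool(self.expected_observations and self.candidate_tests)

# Illustrative example mirroring the Asperger Syndrome reformulation above;
# the test names are hypothetical placeholders.
asperger = Hypothesis(
    condition="The patient has Asperger Syndrome",
    expected_observations=[
        "relatively intact language and cognitive abilities",
        "difficulty with communication, social relationships and imagination",
    ],
    candidate_tests=["formal language assessment", "social communication checklist"],
)

print(asperger.is_testable())  # True: it predicts evidence and names probes
```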
A resource currently under development is a Test Description Language Graphical Tool, a computer-based interactive tree-diagram of the cognitive sub-processes associated with language comprehension and production. Currently a stand-alone programme, this could be presented on the web within PATSy, either with the sub-processes for a particular test already highlighted, or with students highlighting on the diagram the processes they believed to be probed by a particular test. If this helped students to become more aware of the content of a test, it could facilitate theory- and hypothesis-driven reasoning during the assessment process.

Students could also be prompted to make superordinate clinical observations and a tentative hypothesis early in an assessment session. After making a hypothesis, students could be prompted about a suitable test, either before they had made a choice or immediately afterwards if they chose an inappropriate test for their hypothesis. For students using PATSy, these prompts could take the form of a video showing two students discussing a relevant topic. After a series of tests, students could be prompted to attempt a firm diagnostic statement. Again, within PATSy, video clip examples of students doing this could be offered concurrently.

McAllister and Rose (2000) promote and describe curriculum interventions designed to make the clinical reasoning process conscious and explicit, without separating it from domain knowledge. They claim that this helps students to integrate knowledge and reasoning skills. Whilst this is not a universally shared opinion (e.g., Doyle, 1995, 2000), the results described here indicate that students at all levels may benefit from explicit teaching of strategies to improve their clinical reasoning and consolidate their domain knowledge.

References

AROCHA, J. F. and PATEL, V. L., 1995, Novice diagnostic reasoning in medicine: accounting for clinical evidence. Journal of the Learning Sciences, 4, 355-384.

BOSHUIZEN, H. P. A. and SCHMIDT, H. G., 1992, On the role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cognitive Science, 16, 153-184.

BOSHUIZEN, H. P. A. and SCHMIDT, H. G., 2000, The development of clinical reasoning expertise. In J. Higgs and M. Jones (eds), Clinical Reasoning in the Health Professions (Edinburgh: Butterworth-Heinemann), pp. 15-22.

CHOLOWSKI, K. M. and CHAN, L. K. S., 2001, Prior knowledge in student and experienced nurses' clinical problem solving. Australian Journal of Educational and Developmental Psychology, 1, 10-21.

DOYLE, J., 1995, Issues in teaching clinical reasoning to students of speech and hearing science. In J. Higgs and M. Jones (eds), Clinical Reasoning in the Health Professions (Edinburgh: Butterworth-Heinemann), pp. 224-234.

DOYLE, J., 2000, Teaching clinical reasoning to speech and hearing students. In J. Higgs and M. Jones (eds), Clinical Reasoning in the Health Professions (Edinburgh: Butterworth-Heinemann), pp. 230-235.

ELSTEIN, A. S., SHULMAN, L. S. and SPRAFKA, S. A., 1978, Medical Problem Solving: An Analysis of Clinical Reasoning (Cambridge, MA: Harvard University Press).

HMELO-SILVER, C., NAGARAJAN, A. and DAY, R. S., 2002, "It's harder than we thought it would be": a comparative case study of expert-novice experimentation strategies. Science Education, 86, 219-243.

HOWARD, D. and PATTERSON, K., 1992, Pyramids and Palm Trees: A Test of Semantic Access from Words and Pictures (Bury St Edmunds: Thames Valley Test Company).

JOSEPH, G.-M. and PATEL, V. L., 1990, Domain knowledge and hypothesis generation in diagnostic reasoning. Medical Decision Making, 10, 31-46.

KAHNEMAN, D., SLOVIC, P. and TVERSKY, A., 1982, Judgement under Uncertainty: Heuristics and Biases (New York: Cambridge University Press).

KLAHR, D., 2000, Exploring Science: The Cognition and Development of Discovery Processes (Cambridge, MA: The MIT Press).

LUM, C. and COX, R., 1998, PATSy - a distributed multimedia assessment training system. International Journal of Communication Disorders, 33, 170-175.

MAVIS, B. E., LOVELL, K. L. and OGLE, K. S., 1998, Why Johnnie can't apply neuroscience: testing alternative hypotheses using performance-based assessment. Advances in Health Sciences Education, 3, 165-175.

MCALLISTER, L. and ROSE, M., 2000, In J. Higgs and M. Jones (eds), Clinical Reasoning in the Health Professions (Edinburgh: Butterworth-Heinemann), pp. 205-213.
O'NEILL, E. S., DLUHY, N. M. and CHIN, E. M., 2005, Modelling novice clinical reasoning for a computerized decision support system. Journal of Advanced Nursing, 49, 68-77.

PATEL, V. L. and GROEN, G. J., 1986, Knowledge based solution strategies in medical reasoning. Cognitive Science, 10, 91-116.

PATEL, V. L., KAUFMAN, D. R. and MAGDER, S., 1991, Causal reasoning about complex physiological concepts by medical students. International Journal of Science Education, 13 (2), 171-185.

PATEL, V. L., GROEN, G. J. and PATEL, Y. C., 1997, Cognitive aspects of clinical performance during patient workup: the role of medical expertise. Advances in Health Sciences Education, 2, 95-114.

RIKERS, R. M. J. P., SCHMIDT, H. G. and BOSHUIZEN, H. P. A., 2000, Knowledge encapsulation and the intermediate effect. Contemporary Educational Psychology, 25, 150-166.

RIKERS, R. M. J. P., LOYENS, S. M. M. and SCHMIDT, H. G., 2004, The role of encapsulated knowledge in clinical case representations of medical students and family doctors. Medical Education, 38, 1035-1043.

SCHAUBLE, L., 1996, The development of scientific reasoning in knowledge-rich contexts. Developmental Psychology, 32, 102-119.

SHANTEAU, J., 1992, How much information does an expert use? Is it relevant? Acta Psychologica, 81, 75-86.

SLOUTSKY, V. M. and YARLAS, A. S., 2000, Problem representation in experts and novices: part 2. Underlying processing mechanisms. In L. R. Gleitman and A. K. Joshi (eds), Twenty-Second Annual Conference of the Cognitive Science Society (Mahwah, NJ: Lawrence Erlbaum Associates).

Appendix: Seven-Level Coding Scheme

Level Zero: Other

This category includes statements that contain features that cross coding categories and are recorded as ambiguous. In addition, it includes hypotheses that cannot be tested with the data available on the PATSy system, e.g. speculation about the patient's lifestyle or the patient's state of mind on a particular day.

Level One: Reading of data

This category includes statements that consist of reading data aloud without making any comment or additional interpretation.

Level Two: Making a concrete observation

This category includes statements about a single piece of data which do not use any professional terminology (i.e. they could be made by a lay person with no domain-specific knowledge). Statements at this level do make some level of comment on, or interpretation of, the data, beyond simply reading it aloud.

Level Three: Making a superordinate level clinical observation

This category contains statements which extrapolate to a higher-level concept. Alternatively, statements at this level may compare information to a norm or other test result, e.g. "11 months behind but that doesn't strike me as particularly significant". There is evidence of the use of some professional register rather than lay terms. These statements can be differentiated from higher-level statements because they are not phrased in the language of diagnostic certainty. Similarly, they are not couched in hypothesis language, i.e. they could not trigger a specific search strategy through assessments or observations.
They may make statements from the data, including words such as "seems", but they do not predict from the data.

Level Four: True hypothesis

There are a number of characteristics of Level Four statements. The crucial element is the expression of a causal relationship between two factors. This may be expressed by an explicit or implicit "if...then" structure. Statements at this level may be phrased as a question directed at the data, e.g. "are these results saying autism?" They may be couched as a predictive statement that might trigger a search/test strategy, e.g. "he could be dyspraxic". As these statements can function as triggers to search and evaluate further data, hypotheses that cannot be tested with the tests and data available on PATSy are not counted, nor are hypotheses which are too vague to follow up. Speculations on, for example, medical conditions are not included as hypotheses (such statements should be included in category zero, "Other").

Statements in this category include at least some professional register. However, they are not phrased in the language of diagnostic certainty. Statements with tag questions should be carefully evaluated, as tag questions have a social function and are therefore not in themselves used as part of the coding process. For example, the question "I think it may be autism, don't you?" would be coded at Level Four because of the predictive language "I think it may be", not the tag question.

Level Five: Diagnostic statement

Statements in this category are phrased in the language of diagnostic certainty. They may contain strong linguistic markers of certainty, such as "definitely" or "certain", or a statement such as "he's got" or "it is". They do not contain any indicators of uncertainty such as questions, predictive language, vocabulary such as "if", "maybe", "possibly", or statements such as "I think X" or "I reckon Y". Statements in this category consist of those which include or exclude a superordinate diagnostic category. The granularity of the statement is such that it allows broad exclusion/inclusion of diagnostic categories, e.g. language versus speech disorder. Statements in this category are likely to be found in a letter to another professional rather than to a speech and language therapist.

Level Six: Diagnostic certainty

Statements in this category are phrased in the language of diagnostic certainty. They may contain strong linguistic markers of certainty, such as "definitely" or "certain", or a statement such as "he's got" or "it is". They do not contain any indicators of uncertainty such as questions, predictive language, or vocabulary such as "if", "maybe", "possibly". Statements in this category consist of those which include or exclude a superordinate diagnostic category. They use predominantly appropriate professional register. Statements with tag questions eliciting agreement with diagnostic certainty should be included in this category, e.g. "It's autism, isn't it?" They are likely to be used in a report to a speech and language therapist, i.e. they use specific professional terminology. Statements at this level have a finer granularity of description than Level Five statements.
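As a compact summary of the scheme defined above, and in the spirit of the self-assessment use suggested in the Discussion, the sketch below shows one way the seven levels could be represented in code and used to tag example statements. It is illustrative only: the level names follow the Appendix, but the enumeration and the example mapping are ours and are not part of PATSy or of the study's coding procedure.

```python
from enum import IntEnum

class StatementLevel(IntEnum):
    """The seven-level coding scheme (see the Appendix for full definitions)."""
    OTHER = 0                      # ambiguous or untestable statements
    READING_OF_DATA = 1            # reading data aloud, no interpretation
    CONCRETE_OBSERVATION = 2       # single data point, lay terminology
    SUPERORDINATE_OBSERVATION = 3  # higher-level concept, some professional register
    HYPOTHESIS = 4                 # predicted causal relationship ("if...then")
    GENERAL_DIAGNOSTIC = 5         # diagnostic certainty, broad category
    SPECIFIC_DIAGNOSTIC = 6        # diagnostic certainty, fine-grained SLT terminology

# Illustrative self-assessment: the two statements are the worked example
# given in the Discussion; the level assignments mirror that example.
examples = {
    "Scores worse when words are involved": StatementLevel.CONCRETE_OBSERVATION,
    "Worse at accessing semantic information from written form": StatementLevel.SUPERORDINATE_OBSERVATION,
}

for statement, level in examples.items():
    print(f"Level {int(level)} ({level.name}): {statement}")
```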
Level                                        0      1      2      3      4      5      6
Diagnostically accurate students (N = 8)     12.5   17.5   42.9   48.1   31.2   8.4    9.5
Diagnostically inaccurate students (N = 9)   12.9   26.1   61.2   50.9   33.4   6.9    1.7

Table 1. Average number of statements per dyad made at each level by diagnostically accurate and diagnostically inaccurate students

[Figure 1: column graph. X-axis: statement level (Level 0 to Level 6); Y-axis: mean number of statements (0 to 70); one column per level for each of the diagnostically accurate and diagnostically inaccurate sub-groups.]

Figure 1. Mean number of statements per dyad made by diagnostically accurate and diagnostically inaccurate sub-groups of students over one hour

Percentage of total statements at each diagnostic statement level
Participants    0       1       2       3       4       5       6
Expert pair     0       0       5.26    60.52   13.15   7.89    13.15
DA pair C       4.16    0       12.5    41.66   20.83   12.5    8.33
DA pair F       8.33    0       4.16    54.16   12.5    4.16    16.66
DA pair P       0       9.52    4.76    23.80   33.33   4.76    14.28
DI pair E       3.70    7.40    7.40    37.03   40.74   0       3.70
DI pair K       13.04   4.34    8.69    34.78   21.73   13.04   4.34
DI pair M       6.45    29.03   25.80   25.80   12.90   0       0

Table 2. Statements at each level made by the expert pair and by diagnostically accurate (DA) and diagnostically inaccurate (DI) student pairs on PATSy case DBL, expressed as a percentage of the total for each pair
