
Paper presented at the Royal College of Speech and Language Therapists

Conference "Realising the vision", University of Ulster, 10-12 May 2006

The Clinical Reasoning Skills of Speech and Language Therapy Students

Kirsty Hoben1, Rosemary Varley1, and Richard Cox2

1 Department of Human Communication Sciences, University of Sheffield,

Sheffield, UK

Tel: 0114 2222454

Fax: 0114 2730547

Email: [email protected]

2 Department of Informatics, School of Science and Technology, University of

Sussex, Falmer, Brighton, UK

Key words: clinical reasoning, students, speech and language therapy

What is already known on this subject: Studies in medicine and related fields

have shown that students have difficulties with both knowledge and strategy in

clinical reasoning tasks. To date, there has been little research in this area in

speech and language therapy.

What this study adds: Speech and language therapy students show clinical

reasoning difficulties similar to those observed in medicine and other related

domains. They could benefit from explicit teaching of strategies to improve their

clinical reasoning and consolidate their domain knowledge.

Abstract

Background

Difficulties experienced by novices in clinical reasoning have been well

documented in many fields, especially medicine (Elstein et al., 1978; Patel and

Groen, 1986; Boshuizen and Schmidt, 1992, 2000; Rikers et al., 2004). These

studies have shown that novice clinicians have difficulties with both knowledge

and strategy in clinical reasoning tasks. Speech and language therapy students

must also learn to reason clinically, yet to date there is little evidence of how they

learn to do so.

Aims

In this paper, we report the clinical reasoning difficulties of a group of speech and

language therapy students. We make a comparison with experienced speech and

language therapists’ reasoning and propose some methods and materials to aid

the development of clinical reasoning in speech and language therapy students.

Methods and procedures

Student reasoning difficulties were analysed during assessment of unseen cases

on an electronic patient database, the Patient Assessment and Training System

(PATSy; www.patsy.ac.uk). Students were videoed as they completed a one-hour

assessment of one of three “virtual patients”. One pair of experienced speech and language

therapists also completed an assessment of one of these cases under the same

conditions. Screen capture was used to record all on-screen activity, along with an

electronic log of the tests and information accessed and comments entered by the

students and experienced therapists. These comments were analysed via a

seven-level coding scheme that aimed to describe the events that occur in the process of

diagnostic reasoning.

Outcomes and Results

Students displayed a range of competence in making an accurate diagnosis.

Diagnostically accurate students showed increased use of high-level professional

register and a greater use of firm diagnostic statements. For the diagnostically

inaccurate students, typical difficulties were a failure to interpret test results and

video observations, difficulty in carrying out a sequence of tests consistent with a

reasoning path, and problems in recalling and using theoretical knowledge.

Conclusions and Implications

We discuss how identification of student reasoning difficulties can inform the

design of learning materials intended to address these difficulties.

Introduction

Previous studies have revealed that novices display a number of difficulties in

assessing a problem. Much of this research has been in the domain of medicine,

and common difficulties include a tendency to focus on surface features of

problems (Sloutsky and Yarlas, 2000) and an inflexible application of knowledge

and strategies to the diagnostic problem (Mavis et al., 1998). Novices are less

likely than experts to be aware of confounding information

(e.g., that a language assessment may tap multiple aspects of communication),

and are more likely to be data driven than theory driven in their planning. They

also tend to begin tasks without clear goals (Klahr, 2000). Furthermore, novices

are less able to evaluate their progress or results from a task (Hmelo-Silver et al.,

2002), and have difficulty modifying or abandoning hypotheses in the face of

contradictory evidence (Arocha and Patel, 1995). They can have difficulty

distinguishing relevant information in a problem (Shanteau, 1992; Cholowski and

Chan, 2001), and may not have well elaborated schemata of diagnoses and

patterns of presenting problems (Patel et al., 1997, 2000; Boshuizen and Schmidt,

2000). Novices can be slow in decision making and hypothesis generation (O'Neill

et al., 2005; Joseph and Patel, 1990), and may harbour misconceptions or over-

simplifications of domain-specific concepts which consequently affect

interpretation of results (Patel et al., 1991; Schauble, 1996).

McAllister & Rose (2000) acknowledge the relative paucity of research into

the processes of clinical reasoning in speech and language therapy. However,

there are similarities in the global characteristics of diagnostic reasoning across

related professions such as medicine, physiotherapy, occupational therapy and

nursing. It is likely, therefore, that speech and language therapy novices will

display similar reasoning difficulties to those observed in novices from other

clinical domains.

The current research examined speech and language therapy students’

developing clinical reasoning skills. Clinical reasoning involves both domain-

specific knowledge and reasoning (i.e., knowledge pertaining directly to speech

and language therapy) and domain-general reasoning (i.e., reasoning skills that

any person could be expected to have). The current research used an existing

database of speech and language therapy cases, the Patient Assessment and

Training System (PATSy) (Lum and Cox, 1998). The database consists of “virtual

patients”, and includes video clips, medical history, assessment results and links

to related publications. Students are able to “administer” tests to patients and keep

a log of their findings and conclusions.

Methods

Participants: The study recruited 34 masters level and undergraduate speech and

language therapy students (8/34 participants were masters level students) from

two UK universities via posters on notice boards and email. Undergraduate

students were in year three of their studies and masters level students were in

their second year of study. In addition, two experienced speech and language

therapists took part. University ethical approval was granted for the conduct of the

research and all usual ethical practices were observed.

Procedures: Students and experienced therapists worked in pairs (dyad pairings

were mostly self-selected). They were given one hour to undertake the diagnosis

of one of three pre-selected PATSy cases: DBL and RS, both acquired cases, or JS1,

a developmental case. The PATSy cases used for the study all exhibited a degree

of ambiguity in their clinical presentation, i.e., their behavioural profile might be

consistent with a number of possible diagnoses of underlying impairment.

Participants were asked to produce a set of statements that described key

impairments shown by the case, and if possible, an overall diagnostic category.

The diagnostic process of the students was video-recorded and all participants

completed a learning log that is automatically generated and stored within PATSy.

A screen capture was also performed, allowing subsequent synchronised playback

of the video, audio and screen activity, using NITE tools (http://www.ltg.ed.ac.uk/NITE/),

developed at the University of Edinburgh.

Analyses

Prior to the coding of data, student pairs were independently categorised as

diagnostically accurate (DA) or inaccurate (DI) based on whether they reached a

diagnosis that was at an appropriate level for a novice/newly qualified speech and

language therapist. DA students were those who were able to report key

impairments displayed by the case (e.g., type and extent of lexical retrieval deficit).

Similarly, the tests selected by a pair were evaluated to determine if the dyad was

using tests that were relevant to the behavioural impairments shown by the case

and to the comments they had made in dialogue and their written log. In addition,

test choices were examined for relatedness and movement along a diagnostic

path. For example, a test sequence involving a switch from a picture semantics

task such as Pyramids and Palm Trees (Howard and Patterson, 1992) to non-

word repetition in the context of discussion of written word comprehension was

classed as an unrelated sequence. The performance of a subset of student pairs

was compared to that of experienced clinicians diagnosing the aphasic case DBL.

The statements made by participants in dialogue with their partner and in the

written log were coded for particular statement types that might occur in diagnostic

reasoning. The coding scheme contained seven categories. (See Appendix One

for definitions and examples of each category).


Level Zero: Other

This category included ambiguous statements and hypotheses that could not be

tested with the data available on the PATSy system.

Level One: Reading of data

This category included statements that consisted of reading aloud data without

making any comment or additional interpretation.

Level Two: Making a concrete observation

This category included statements about a single piece of data which did not use

any professional terminology.

Level Three: Making a superordinate level clinical observation

This category contained descriptive statements which extrapolated to a higher

level concept.

Level Four: Hypothesis

This category included statements that expressed a predicted causal relationship

between two factors.

Level Five: General diagnostic statement

Statements in this category consisted of those which included or excluded a

superordinate diagnostic category and were of the type that might be used in a

report to another professional, rather than a speech and language therapist.

Level Six: Specific diagnostic statement

Statements in this category shared the characteristics of Level Five diagnostic

statements. However, statements at this level had a finer granularity of description

than Level Five statements and might be used in a report to another speech and

language therapist.
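
For readers who prefer a compact schematic, the scheme can be summarised as a simple enumeration. The Python sketch below is purely illustrative: the Level Four example utterance is hypothetical, while the Level Two and Three examples reuse those given later in the Discussion.

```python
from enum import IntEnum

class StatementLevel(IntEnum):
    """Seven-level coding scheme for statements produced during diagnostic reasoning."""
    OTHER = 0                       # ambiguous, or a hypothesis untestable on PATSy
    READING_OF_DATA = 1             # data read aloud with no comment or interpretation
    CONCRETE_OBSERVATION = 2        # single observation, lay terminology only
    SUPERORDINATE_OBSERVATION = 3   # higher-level clinical observation, professional register
    HYPOTHESIS = 4                  # predicted causal relationship ("if...then")
    GENERAL_DIAGNOSTIC = 5          # includes/excludes a broad diagnostic category
    SPECIFIC_DIAGNOSTIC = 6         # fine-grained diagnosis in professional terminology

# Illustrative utterances; the Level 4 example is hypothetical, the others reuse
# examples given in the Discussion section of this paper.
EXAMPLES = {
    StatementLevel.CONCRETE_OBSERVATION: "Scores worse when words are involved",
    StatementLevel.SUPERORDINATE_OBSERVATION: "Worse at accessing semantic information from written form",
    StatementLevel.HYPOTHESIS: "If this were a semantic deficit, both picture and word tasks would be impaired",
}
```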

Intra-rater reliability was assessed on codings with a time interval of 4 months

between categorisations. A Kappa score of 0.970 was achieved, indicating highly

satisfactory intra-rater reliability. Inter-rater reliability was established by two raters

independently coding 30% of the dialogue data sample. One rater was blind to the

PATSy case, participants and site at which data was collected, although

occasionally the nature of the discussion about the cases, particularly the

paediatric case, made it impossible to be blind to the case. A Kappa score of

0.888 was achieved, indicating satisfactory inter-rater reliability.
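
The kappa statistic corrects raw percentage agreement for the agreement expected by chance. Assuming Cohen's kappa was the statistic used, the short Python sketch below shows one way it could be computed from two raters' level codings; the rating sequences are invented purely for illustration and are not the study data.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same statements."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: sum over levels of the product of each rater's marginal proportion
    expected = sum((freq_a[k] / n) * (freq_b[k] / n) for k in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Invented codings (statement levels 0-6) for ten statements, for illustration only
rater_1 = [2, 3, 3, 4, 6, 2, 5, 3, 1, 0]
rater_2 = [2, 3, 3, 4, 6, 2, 5, 3, 1, 1]
print(round(cohen_kappa(rater_1, rater_2), 3))  # 0.878 with these invented codings
```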

Results

Eight pairs of students were categorised as being diagnostically accurate (DA).

The remaining nine pairs did not produce a diagnosis that was viewed as accurate

for a novice clinician. The difficulties displayed by the diagnostically inaccurate

sub-group were: a failure to interpret test results and video observations, difficulty

in carrying out a sequence of tests consistent with a reasoning path, and problems

in recalling and using theoretical knowledge.

Table 1 displays the average number of statements per dyad for each type

produced by the DA and DI subgroups. The data reveal some disparities between

the groups: the DI group had more statements at the lower, more descriptive

levels, but had fewer statements at Level Six.

INSERT TABLE 1 ABOUT HERE
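
The group averages reported in Table 1 could be derived from the coded session logs along the following lines; the log structure and dyad identifiers in this sketch are hypothetical, chosen only to illustrate the calculation.

```python
from collections import defaultdict

# Hypothetical coded logs: dyad identifier -> list of statement levels (0-6) from one session
coded_logs = {
    "DA_dyad_1": [2, 3, 3, 4, 6, 5, 3],
    "DA_dyad_2": [3, 3, 4, 6, 2],
    "DI_dyad_1": [1, 1, 2, 2, 3, 4],
    "DI_dyad_2": [2, 2, 2, 3, 1],
}
group_of = {"DA_dyad_1": "DA", "DA_dyad_2": "DA", "DI_dyad_1": "DI", "DI_dyad_2": "DI"}

def mean_statements_per_dyad(coded_logs, group_of):
    """Mean number of statements per dyad at each level, split by DA/DI group."""
    totals = defaultdict(lambda: defaultdict(int))   # group -> level -> statement count
    dyads_per_group = defaultdict(int)
    for dyad, levels in coded_logs.items():
        dyads_per_group[group_of[dyad]] += 1
        for level in levels:
            totals[group_of[dyad]][level] += 1
    return {group: {level: totals[group][level] / dyads_per_group[group] for level in range(7)}
            for group in totals}

print(mean_statements_per_dyad(coded_logs, group_of))
```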

The same data are displayed in a column graph in Figure 1. Student use of Level

Three, Four and Five statements, that is, superordinate statements using

professional terminology, statements postulating relationships between two

variables, and general diagnostic statements appeared with similar frequency in

the two subgroups. The DA group produced more Level Six statements where the

diagnosis was expressed in professional terminology of fine granularity. This

suggested that this cohort could link the patterns of behaviour observed in the

patient case to highly specific domain knowledge, i.e., these students could bridge

the gap between theory and practice.

INSERT FIGURE ONE ABOUT HERE

The subset of students (dyads N=6) who diagnosed case DBL was compared to

the experienced therapist pair who evaluated the same case. The results are

presented in Table Two.

INSERT TABLE TWO ABOUT HERE

Table Two shows that the experienced therapists did not make Level Zero or Level

One statements. They made very few Level Two statements but a greater number

of Level Three statements, compared to either of the student groups. Experienced

therapists also made a higher proportion of firm diagnostic statements at Levels

Five and Six, compared to either of the student groups. Student results on case

DBL conform to the general pattern observed across all PATSy cases. Again, DA

students made fewer Level One and Two statements and more Level Six

statements. The profile of the DA students was more similar to that of experienced

clinicians than that of the DI group.

Further qualitative analyses of student performance revealed a number of themes

indicative of problems in diagnostic reasoning. For example, some students

displayed few well elaborated schemata of diagnoses, leading to difficulties in

making sense of data:

“She was making errors, wasn’t she, in producing words but I haven’t really found

any pattern yet”.

“It’s hard to know what to look at, isn’t it? What means something and what

doesn’t.”

“I’m not entirely sure why they’re (client’s responses) not very appropriate”

The high numbers of Level One and Two statements in the DI group reflect

problems in this area: the patient’s behaviours are noted, but the students have

difficulty interpreting their significance or relationship. Some students showed

difficulty in carrying out a sequence of tests consistent with a reasoning path, for

example, one dyad chose the following test sequence at the beginning of their

assessment of the paediatric case JS1: a handwriting sample, a non-word reading

test followed by a word reading test and then a questionnaire on the client’s social

and academic functioning. They started with marginally relevant and relatively

fine-grained tests before going on to look at the questionnaire. In this case, the

questionnaire gave useful background information about broad areas of difficulty

for the client. Evaluating this evidence would have been more useful at the

beginning of their assessment as it allows the clinician to “reduce the problem

space” in which they are working and to focus their diagnostic effort on areas that

are more likely to be crucial to the understanding of the case. No hypotheses or

specific clinical reasons for these tests were given by the students, indicating that

they were not using the tests to attempt to confirm or disconfirm a hypothesis

about the case they were diagnosing. Their approach was descriptive, rather than

theory or hypothesis-driven.

Discussion

The many studies of clinical reasoning in other related domains provide

evidence that there may be common patterns of development from novice to

expert that the speech and language therapy profession can learn from and

contribute to as researchers. Empirically and theoretically supported resources

could be developed, such as “intelligent tutors”, using hypermedia support to allow

novice speech and language therapy students to learn in a “virtual” situation, thus

allowing them to be better prepared when interacting with real patients.

The analysis of the data presented here has led to a number of ideas for

enhancing students’ clinical reasoning, which offer potential for use as formative

assessment tools for educators, but also as self-assessment tools for students.

For example, making students aware of the types of statement described in the

coding scheme presented here could provide a structure for self monitoring and

assessment, enabling students to evaluate and develop their own reasoning skills.

A student making Level Two statements could use the descriptors in the coding

scheme to develop those types of statements into Level Three statements, for

example, from “Scores worse when words are involved” (Level Two) to “Worse at

accessing semantic information from written form” (Level Three).

A hypothesis can be defined as a predicted causal relationship between two

factors, with an explicit or implicit “if…then” structure. Clarifying this interpretation

of a hypothesis and the type of testing behaviour that it could trigger might help

students to develop testable, pertinent hypotheses that should in turn, make the

assessment process more efficient and complete. For example, from “Could it be

Asperger’s?” to “If the patient had Asperger Syndrome, we would expect to see

evidence of relatively intact language and cognitive abilities but difficulty in

communicating, social relationships, and imagination.”

A resource currently under development consists of a Test Description

Language Graphical Tool, which is a computer-based interactive tree-diagram of

the cognitive sub-processes associated with language comprehension and

production. Currently a stand-alone programme, this could be presented on the

web within PATSy, either with the sub-processes for a particular test already

highlighted, or with students highlighting on the diagram the processes they believed

to be probed by a particular test. If this helped students to become more aware of

the content of a test then it could facilitate theory and hypothesis-driven reasoning

during the assessment process.
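
As an illustration of how such a tool might represent the model internally, the sketch below builds a small tree of sub-processes and marks those assumed to be probed by a chosen test. The node labels, the test name and the test-to-process mapping are placeholders, not the content of the actual Test Description Language Graphical Tool.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A node in a tree of cognitive sub-processes involved in comprehension/production."""
    name: str
    children: list = field(default_factory=list)

# Hypothetical fragment of such a model; the labels are placeholders only
model = Process("spoken word comprehension", [
    Process("auditory phonological analysis"),
    Process("phonological input lexicon"),
    Process("semantic system"),
])

# Hypothetical mapping from a test to the sub-processes assumed to be probed by it
TEST_PROBES = {
    "spoken word-picture matching": {
        "auditory phonological analysis", "phonological input lexicon", "semantic system",
    },
}

def print_highlighted(node, probed, depth=0):
    """Print the tree, marking the nodes believed to be probed by the selected test."""
    marker = "*" if node.name in probed else " "
    print(f"{'  ' * depth}[{marker}] {node.name}")
    for child in node.children:
        print_highlighted(child, probed, depth + 1)

print_highlighted(model, TEST_PROBES["spoken word-picture matching"])
```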

Students could be prompted to make superordinate clinical observations and a

tentative hypothesis early in an assessment session. After making a hypothesis,

students could be prompted about a suitable test either before they had made a

choice, or immediately afterwards if they chose an inappropriate test for their

hypothesis. For students using PATSy, these prompts could take the form of a

video showing two students discussing a relevant topic. After a series of tests,

students could be prompted to attempt a firm diagnostic statement. Again, within

PATSy, video clip examples of students doing this could be offered concurrently.
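
A minimal sketch of how this conditional prompting might be implemented in a tutoring layer is given below; the function names, the hypothesis-to-test mapping and the wording of the prompts are assumptions made for illustration and are not features of the current PATSy system.

```python
# Hypothetical mapping: tests that would be reasonable follow-ups to a stated hypothesis
RELEVANT_TESTS = {
    "semantic deficit": {"Pyramids and Palm Trees", "spoken word-picture matching"},
}

def prompt(message):
    # In PATSy this might instead trigger a video clip of two students discussing the topic
    print("PROMPT:", message)

def on_test_selected(current_hypothesis, chosen_test, tests_completed):
    """Nudge the student when the chosen test does not bear on their stated hypothesis."""
    if current_hypothesis is None:
        prompt("Try stating a tentative hypothesis before choosing your next test.")
    elif chosen_test not in RELEVANT_TESTS.get(current_hypothesis, set()):
        prompt(f"How would '{chosen_test}' help to test the hypothesis '{current_hypothesis}'?")
    if len(tests_completed) >= 4:
        prompt("Several tests have been completed; can you attempt a firm diagnostic statement?")

# Example: a semantic-deficit hypothesis followed by a non-word repetition test
on_test_selected("semantic deficit", "non-word repetition",
                 tests_completed=["Pyramids and Palm Trees", "picture naming"])
```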

McAllister and Rose (2000) promote and describe curriculum interventions

designed to make the clinical reasoning process conscious and explicit, without

separating it from domain knowledge. They claim that this helps students to

integrate knowledge and reasoning skills. Whilst this is not a universally shared

opinion (e.g., Doyle, 1995, 2000), the results described here indicate that students

at all levels may benefit from explicit teaching of strategies to improve their clinical

reasoning and consolidate their domain knowledge.

References:

AROCHA, J. F. and PATEL, V. L., 1995, Novice diagnostic reasoning in medicine:

Accounting for clinical evidence. Journal of the Learning Sciences, 4, 355-384.

BOSHUIZEN, H. P. A. and SCHMIDT, H. G., 2000, The development of clinical

reasoning expertise. In J. Higgs, and M. Jones, (eds), Clinical Reasoning in the

Health Professions (Butterworth Heinemann, Edinburgh), pp.15-22.

BOSHUIZEN, H. P. A. and SCHMIDT, H. G., 1992, On the role of biomedical

knowledge in clinical reasoning by experts, intermediates and novices. Cognitive

Science, 16, 153-184.

CHOLOWSKI, K. M. and CHAN, L. K. S. 2001, Prior knowledge in student and

experienced nurses' clinical problem solving. Australian Journal of Educational

and Developmental Psychology, 1, 10-21.

DOYLE, J. 1995, Issues in teaching clinical reasoning to students of speech and

hearing science. In J. Higgs, and M. Jones, (eds), Clinical Reasoning in the Health

Professions (Butterworth Heinemann, Edinburgh), pp. 224-234.

DOYLE, J. 2000, Teaching clinical reasoning to speech and hearing students. In J.

Higgs, and M. Jones, (eds), Clinical Reasoning in the Health Professions

(Butterworth-Heinemann, Edinburgh), pp. 230-235.

ELSTEIN, A.S., SHULMAN, L.S and SPRAFKA, S.A. 1978, Medical Problem

Solving: An Analysis of Clinical Reasoning. (Cambridge, MA: Harvard University

Press).

HMELO-SILVER, C., NAGARAJAN, A. and DAY, R. S. 2002, ‘‘It’s Harder than We

Thought It Would be”: A Comparative Case Study of Expert-Novice

Experimentation Strategies. Science Education, 86, 219-243.

HOWARD, D. and PATTERSON, K. 1992, Pyramids and Palm Trees: a test of

semantic access from words and pictures. (Bury St Edmunds: Thames Valley Test

Company).

JOSEPH, G.-M. and PATEL, V. L. 1990, Domain knowledge and hypothesis

generation in diagnostic reasoning. Medical Decision Making, 10, 31-46.

KAHNEMAN, D., SLOVIC, P. and TVERSKY, A., 1982, Judgement under

uncertainty: Heuristics and biases, (New York: Cambridge University Press).

KLAHR, D., 2000, Exploring Science: The Cognition and Development of Discovery

Processes, (Massachusetts: The MIT Press).

LUM, C. and COX, R. 1998, PATSy - A distributed multimedia assessment training

system. International Journal of Communication Disorders, 33, 170-175.

MAVIS, B. E., LOVELL, K. L. and OGLE, K. S. 1998, Why Johnnie Can't Apply

Neuroscience: Testing Alternative Hypotheses Using Performance-Based

Assessment. Advances in Health Sciences Education, 3, 165-175.

MCALLISTER, L. and ROSE, M. (2000) In J. Higgs and M. Jones, (eds.) Clinical

Reasoning in the Health Professions, (Butterworth-Heinemann, Edinburgh), pp.

205-213.

O'NEILL, E. S., DLUHY, N. M. and CHIN, E. M. 2005, Modelling novice clinical

reasoning for a computerized decision support system. Journal of Advanced

Nursing, 49, 68-77.

PATEL, V.L., and GROEN, G.J. 1986, Knowledge based solution strategies in

medical reasoning. Cognitive Science, 10, 91-116.

PATEL, V.L., KAUFMAN, D.R. and MAGDER, S. 1991, Causal reasoning about

complex physiological concepts by medical students. International Journal of

Science Education, 13 (2), 171-185.

PATEL, V., GROEN, G. J. and PATEL, Y. C. 1997, Cognitive Aspects of Clinical

Performance During Patient Workup: The Role of Medical Expertise. Advances in

Health Sciences Education, 2, 95-114.

RIKERS, R. M. J. P., SCHMIDT, H. G. and BOSHUIZEN, H. P. A., 2000,

Knowledge Encapsulation and the Intermediate Effect. Contemporary Educational

Psychology, 25, 150-166.

RIKERS, R. M. J. P., LOYENS, S. M. M. and SCHMIDT, H. G. 2004, The role of

encapsulated knowledge in clinical case representations of medical students and

family doctors. Medical Education, 38, 1035-1043.

SCHAUBLE, L. 1996, The Development of Scientific Reasoning in Knowledge-

Rich Contexts. Developmental Psychology, 32, 102-119.

SHANTEAU, J. 1992, How much information does an expert use? Is it relevant?

Acta Psychologica, 81, 75-86.

SLOUTSKY, V. M. and YARLAS, A. S. 2000, Problem Representation in Experts

and Novices: Part 2. Underlying Processing Mechanisms. In L. R. Gleitman and A.

K. Joshi, (eds) Twenty Second Annual Conference of the Cognitive Science

Society. (Mahwah, NJ: Lawrence Erlbaum Associates).

Appendix: Seven Level Coding Scheme

Level Zero: Other

This category includes statements that contain features that cross coding

categories and are recorded as ambiguous. In addition, it includes hypotheses that

cannot be tested with the data available on the PATSy system e.g. speculation

about the patient’s lifestyle or the patient’s state of mind on a particular day.

Level One: Reading of data

This category includes statements that consist of reading aloud data without

making any comment or additional interpretation.

Level Two: Making a concrete observation

This category includes statements about a single piece of data which do not use

any professional terminology (i.e. they could be made by a lay person with no

domain specific knowledge). Statements at this level do make some level of

comment on, or interpretation of the data, beyond simply reading it aloud.

Level Three: Making a superordinate level clinical observation

This category contains statements which extrapolate to a higher level concept.

Alternatively, statements at this level may compare information to a norm or other

test result, e.g. “11 months behind but that doesn’t strike me as particularly

significant”. There is evidence of the use of some professional register rather than

lay terms. These statements can be differentiated from higher level statements

because they are not phrased in diagnostic certainty language. Similarly, they are

not couched in hypothesis language, i.e. they could not trigger a specific search

strategy through assessments or observations. They may make statements from

the data including words such as “seems”, but do not predict from the data.

Level Four: True hypothesis

There are a number of characteristics of level four statements. The crucial element

is the expression of a causal relationship between two factors. This may be

expressed by an explicit or implicit “if…then” structure. Statements at this level

may be phrased as a question directed at the data e.g. “are these results saying

autism?” They may be couched as a predictive statement that might trigger a

search/test strategy, e.g. “he could be dyspraxic”. As these statements can

function as triggers to search and evaluate further data, hypotheses that can’t be

tested by the tests and data available on PATSy are not counted, nor are

hypotheses which are too vague to follow up. Speculations on, for example,

medical conditions, are not included as a hypothesis (such statements should be

included in category zero, “Other”).

Statements in this category include at least some professional register. However,

they are not phrased in the language of diagnostic certainty. Statements with tag

questions should be carefully evaluated, as they have a social function and are

therefore not in themselves used as part of the coding process. For example, the

question “I think it may be autism, don’t you?” would be coded at level four

because of the predictive language “I think it may be”, not the tag question.

Level Five: Diagnostic statement

Statements in this category are phrased in the language of diagnostic certainty.

They may contain strong linguistic markers of certainty, such as “definitely”,

“certain”, or a statement such as “he’s got”, “it is”. They do not contain any

indicators of uncertainty such as questions, predictive language, vocabulary such

as “if, maybe, possibly” or statements such as “I think X”, “I reckon Y”. Statements

in this category consist of those which include or exclude a superordinate

diagnostic category. The granularity of the statement is such that it allows broad

exclusion/inclusion of diagnostic categories, e.g. language vs. speech disorder.

Statements in this category are likely to be found in a letter to another professional

rather than a speech and language therapist.

Level Six: Diagnostic certainty

Statements in this category are phrased in the language of diagnostic certainty.

They may contain strong linguistic markers of certainty, such as “definitely”,

“certain”, or a statement such as “he’s got”, “it is”. They do not contain any

indicators of uncertainty such as questions, predictive language, vocabulary such

as “if, maybe, possibly”. Statements in this category consist of those which include

or exclude a superordinate diagnostic category. They use predominantly

appropriate professional register. Statements with tag questions eliciting

agreement with diagnostic certainty should be included in this category e.g. “It’s

autism, isn’t it?” They are likely to be used in a report to a speech and language

therapist, i.e. they use specific professional terminology. Statements at this level

have a finer granularity of description than level five statements.

Diagnostic level                              0      1      2      3      4      5      6
Diagnostically accurate students (N=8)     12.5   17.5   42.9   48.1   31.2    8.4    9.5
Diagnostically inaccurate students (N=9)   12.9   26.1   61.2   50.9   33.4    6.9    1.7

Table 1. Average number of statements per dyad made at each level by diagnostically accurate and inaccurate students

[Figure 1: column graph. Y-axis: mean number of statements; X-axis: statement level (Level 0 to Level 6); series: diagnostically accurate vs. diagnostically inaccurate students.]

Figure 1. Mean number of statements per dyad made by diagnostically accurate and diagnostically inaccurate sub-groups of students over one hour

Diagnostic statement level (percentage of total)

Participants     0       1       2       3       4       5       6
Expert pair      0       0       5.26   60.52   13.15    7.89   13.15
DA pair C        4.16    0      12.5    41.66   20.83   12.5     8.33
DA pair F        8.33    0       4.16   54.16   12.5     4.16   16.66
DA pair P        0       9.52    4.76   23.80   33.33    4.76   14.28
DI pair E        3.70    7.40    7.40   37.03   40.74    0       3.70
DI pair K       13.04    4.34    8.69   34.78   21.73   13.04    4.34
DI pair M        6.45   29.03   25.80   25.80   12.90    0       0

Table 2. Numbers of statements at each level for experts, diagnostically accurate (DA) and diagnostically inaccurate (DI) pairs expressed as a percentage of the total for each pair for PATSy case DBL
