
Healthcare Quality and Safety Commission

Scientific Symposium

October 2016

Designing Experiments

Professor Nigel Grigg

“My life is an experiment I never had a chance to properly design.” Diana Ballard

Presentation overview

• Experience with experiments

• Process and system variation recap

• Investigating processes and systems

• Principles of experimentation and some experimental designs

• Case study

Experience with Experimentation

• Industry (doing)

– Aircraft industry: 2+ years of parallel improvement projects aimed at optimising the integrity of superalloy turbines and other forged products

• Industry (training / advising)

– Nanotechnology startup company: 2+ years providing advice on DOE method and analysis (assays, process improvement)

– Food industry: 2+ years of M.Phil supervision (filling processes); Six Sigma training for process managers

– Medical device manufacture; banking (Six Sigma training)

• Research (supervision and advisory roles)

– PhDs: rice growing trials (Cambodia); rice chewing trials (Thailand); modelling the tongue and mastication process (NZ); optimising textile production processes (Sri Lanka)

– Various Masters and Postgraduate Diploma projects

[Figure: SIPOC model — Suppliers provide Inputs (materials, information, facilities, staff) to the Process, which delivers Outputs (products and services) to Customers / Stakeholders. The ‘voice’ of the process (feedback / quality data) and the ‘voice’ of the customer (customer requirements) flow back through each operation of the operating system, from external customers to external suppliers: the supply chain / value stream. Inside the ‘black box’: the processing steps.]

Process variation recap

[Figure: Inputs → Process → Outputs, with performance measurement producing process data]

Process measure exhibits variability

BUT

How much of it is actually the process?

Graphical prediction methods in health care – humble beginnings

A chart used by Dr. Carl Wunderlich in 1870 to study nonrandom oral temperature patterns of pneumonia patients

Ring, E. F. J. (2007), The historical development of temperature measurement in medicine, Infrared Physics & Technology, 49(3), 297-301

Sources of process variation include...

1. Process

– Common cause

– Special cause

2. Measurement system

– People

– Devices

– Methods

– Metrics

– Analysis

– (even) the interpretation of analysis!

Processes can be...

• Stable (in control)

– Common cause variation only present

– Variation is therefore stable

– Future variation can therefore be predicted (within certain limits)

• Unstable (out of control)

– Both common and special cause variation present

– Variation is therefore unpredictable
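“Predictable within limits” is usually operationalised with 3-sigma control limits. A minimal sketch of an individuals (XmR) chart calculation, using invented values (the data and variable names are illustrative only, not from any study in this talk):

```python
# Individuals (XmR) chart limits: the data below are invented for illustration.
values = [12.1, 11.8, 12.4, 12.0, 11.6, 12.9, 12.2, 11.9, 12.5, 12.3]

# The average moving range between consecutive points estimates
# common-cause spread without being inflated by slow drifts.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

centre = sum(values) / len(values)
sigma_hat = mr_bar / 1.128          # d2 constant for moving ranges of size 2
ucl = centre + 3 * sigma_hat        # upper control limit
lcl = centre - 3 * sigma_hat        # lower control limit

# A stable process is expected to stay within [lcl, ucl];
# points outside suggest special-cause variation.
out_of_control = [v for v in values if not lcl <= v <= ucl]
print(round(centre, 3), round(lcl, 3), round(ucl, 3), out_of_control)
```

With only common-cause variation present, the list of out-of-control points is empty and future points can be predicted to fall inside the limits.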

Q. Which type of variation is easier to eliminate?

So how do we fix the thorny problem of common cause variation?

Study the process → Test theories → Develop knowledge → Make changes

Classification of research studies

Enumerative studies (population-based)

• System conditions are essentially static over time
• Purpose is parameter estimation
• Analytical methods are statistical (hypothesis tests, CIs, probabilities)

Analytic studies (process-based)

• System conditions are varying over time
• Purpose is process improvement
• Analytical methods are graphical (time-based)

Overarching cycles

[Figure: two knowledge cycles — enumerative studies (traditional science): Theory → Hypotheses → Experimental observations → Empirical generalisation; analytic studies (‘quality science’): Plan → Do → Study → Act]

Both are ‘probes for knowledge’, involving testing interventions by manipulating variables and observing the effect upon other variables.

Improvement Cycle

Plan
• Define problem
• Define objectives
• Evaluate current conditions and other knowledge
• Develop hypotheses
• Make predictions
• Design study

Do
• Carry out the study or experimental design
• Observe the stability of sources of variation during the study

Study
• Analyse the data
• Compare the results to the predictions
• Compare to current knowledge

Act
• Decide whether or not to make the system change

Nothing is quite as practical as a good theory

Other theories…

The interplay between theory and data

“Experience teaches us nothing without theory, but theory without experience is mere intellectual play”

Immanuel Kant

“Without theory it is impossible to make sense of empirically generated data.” Voss et al (2002)

Campbell & Stanley classification of research designs

• Pre-experimental

– Clinical case study

– Pretest-posttest design (before-after study)

– Static-group comparison

• Experimental

– Randomised controlled trials

• Quasi-experimental

– Experiments lacking random allocation

Campbell, D.T., Stanley, J.C. & Gage, N.L. (1966), Experimental and Quasi-Experimental Designs for Research, Chicago: Rand McNally College Pub. Co.

Classifications of Quality Improvement study designs

• Observational

• Retrospective

• Pre-experimental

• Quasi-experimental

• Experimental (e.g. factorial designs)

• Time-series

Speroff, T. (2004), Study Designs for PDSA Quality Improvement Research, Quality Management in Health Care, 13(1), pp.17-32

Example: Patient falls (observational)

Problem

• The objective of the study was to provide a methodology to analyse the incidence of inpatient falls in elder health wards of the hospital.

• Prior to the study, the incidence of patient falls was difficult to determine due to the complex mix of patient factors and environmental factors involved.

• For this reason there was no proactive interventional procedure in place to reduce the incidence of patient falls.

Jayamaha, N. P. & Grigg, N. P. (2009). Monitoring and improving the operational performance of a New Zealand healthcare organisation: a study on patient falls. Proceedings of the 7th ANZAM Operations, Supply Chain and Services Management Symposium, pp.238-250

[Figure: control chart of falls per 1000 bed-days at the Elder Health Wards, periods 1–26, financial years 2007 to 2008; tests performed with unequal sample sizes; centre line and control limits shown]

Example: Cause and Effect Diagram for High Incidence of Inpatient Falls

Outcomes of the study

• Development of methodology for monitoring patient falls

• Searching the root causes for high incidence of falls

• Appreciation that there is a wide gap between existing state of the process and the desired state of the process

• Experimentation with high-tech infrared patient movement monitors fitted to the beds of high risk patients

• Monitored the drop in average incidence rates to develop new control limits

What is Design of Experiments (DoE)?

‘A collection of methods and a strategy to make a change to a product or process and observe the effect of that change on one or more quality characteristics, with the purpose of helping experimenters gain the most information with the resources available’.

(Moen, Nolan and Provost, Quality Improvement through Planned Experimentation, 2nd ed., p.405)

A few prominent industrial statisticians / experimenters

[Figure: timeline, 1900–1960, of prominent industrial statisticians / experimenters]

A few key contemporary (and not-so-contemporary) authors

• Davis Balestracci
• Ronald Moen, Thomas Nolan & Lloyd Provost (1991, 1999)
• George Box, Stuart Hunter & William Hunter (1978)
• Douglas Montgomery (1976)

• R.A. Fisher (1935)

Experimental objectives

[Figure: process model — inputs enter the process; controllable factors x1, x2, …, xp act on it; uncontrollable factors z1, z2, …, zq disturb it; the output is y]

Experimental objectives

1. Where to set the input factors (x’s) so that the output (y) is nearer to its desired target value

Experimental objectives

2. Where to set the input factors (x’s) so that variability in y is reduced

Experimental objectives

3. Where to set the input factors (x’s) so that the influence of the uncontrollable variables (z’s) is minimised (robustisation)

Experimental objectives

4. Simply to determine which variables significantly influence the response, y

KISS (Keep it simple and sequential)

Assessing current knowledge

Some common designs

...and necessary key terms…

Response variable

• The measured outcome(s) of each experimental trial (yi)

• Key questions

– Do you know the measurement error?

– Do you know the measurement variability?

– Can you measure it?

– What kind of value results?

– How can / should those data be analysed?

Q. What happens if the response variable’s measurement error is excessive?

The measurement system

Does it exhibit the necessary...

• Stability (of average)
• Lack of bias
• Consistency (of variation)
• Linearity
• Discrimination

And how do you know?

(Also, beware of Likert scales)
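Why beware: summary statistics on Likert scores can hide the shape of the distribution. The two n = 1100 distributions that follow have identical means but very different spreads; a quick check, using the frequencies from the slide:

```python
# Two Likert score distributions (frequencies from the slide):
# same count, same mean, but very different spread.
freq_a = {1: 300, 2: 200, 3: 100, 4: 200, 5: 300}  # U-shaped
freq_b = {1: 100, 2: 250, 3: 400, 4: 250, 5: 100}  # peaked

def mean_sd(freq):
    """Mean and (population) standard deviation from a frequency table."""
    n = sum(freq.values())
    mean = sum(score * count for score, count in freq.items()) / n
    var = sum(count * (score - mean) ** 2 for score, count in freq.items()) / n
    return mean, var ** 0.5

print(mean_sd(freq_a))  # identical mean of 3.0...
print(mean_sd(freq_b))  # ...but clearly different standard deviations
```

Both report a mean of 3, yet the first distribution is U-shaped and the second is peaked: the mean alone cannot distinguish them.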

2 Likert score distributions, each with n = 1100 observations

[Figure: two histograms of the score frequencies tabulated below]

Value   Freq (A)   Freq (B)
1       300        100
2       200        250
3       100        400
4       200        250
5       300        100
Count   1100       1100
Mean    3          3

Equivalent?

Example: Recent measurement project (cold chain consistency for HIV testing kits)

• Complexity of global supply chains can result in undesirable consequences

• Common for NZ healthcare providers to procure pharmaceutical products and laboratory products from as far as the USA and UK.

• Cold chain products are highly impacted by supply chain inefficiency or ineffectiveness.

• Problem: The lab manager (customer) wanted the kits to be delivered at the specified temperature: 2 °C – 8 °C

• Proposition: Laboratory cold chain products do not receive the same treatment as pharmaceutical cold chain products, even though the quality of the former (e.g. HIV AIDS test kits, reagents) can have as significant an effect on the patient as the quality of the latter.

Dixon, J., Jayamaha, N.P. & Grigg, N. (2014), Laboratory Cold Chain Quality Performance – An Exploratory Study, 12th ANZAM Operations, Supply Chain, and Services Management Symposium, The University of Auckland, Auckland, New Zealand, 3-4 July 2014

Recent measurement project: cold chain consistency for HIV testing kits

Systems under investigation

Research questions and method

• RQ1: Does the delivery process remain stable and predictable?

• RQ2: Is the delivery process capable of meeting the requirements placed upon it?

• RQ3: If the capability of the process needs to be improved, what short-term interventions could be put in place to improve process capability?

• Five logistics companies were approached and agreed to take part in the study.

• Accurate real-time data loggers were placed in the final delivery stage of the supply chain, either separately or along with expired kits.

• Participating companies were aware of the experiment, and were guaranteed anonymity

• No ethics application was made, as no change was being made to DHB practices (exploratory study)

The predictability (variability) and capability of the cold chain system

Analysis of data

• The first trip (#12) that implied instability (out of control) was from a company that encased the logger in an ice slurry, which is not the usual method of transporting cold chain products! Consequently, this was treated as an assignable cause.

• It is probable that, because the shipment was being monitored with a logger, the company decided to ensure it remained between 2 °C and 8 °C. Unfortunately, the temperature fell well below 2 °C, and if this product had been a vaccine, it would have been rendered inactive!

• The second trip that implied instability was found to be the result of product being stored overnight at the courier company's depot at ambient temperature. The trip took place between 30-31 January 2013, when the ambient temperature averaged a high of 26 °C and a low of 16.5 °C. The trip from "supplier to inwards goods" took a total of 18 hours.
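Excursions in a logger trace like these can be flagged programmatically against the 2 °C – 8 °C specification. A minimal sketch; the readings are invented for illustration, not data from the study:

```python
# Flag logger readings outside the 2-8 degC cold chain specification.
# The hourly trace below is invented for illustration only.
SPEC_LOW, SPEC_HIGH = 2.0, 8.0

readings = [4.2, 5.1, 6.8, 7.9, 9.4, 10.2, 7.5, 1.1, 3.3]  # degC, hourly

excursions = [
    (hour, temp)
    for hour, temp in enumerate(readings)
    if not SPEC_LOW <= temp <= SPEC_HIGH
]
print(excursions)  # (hour, temperature) pairs where the shipment left spec
```

Both failure modes seen in the study show up this way: too warm (depot storage at ambient temperature) and too cold (the ice-slurry trip).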

One possible theory for the variation?

Factor

• Variable being deliberately varied in the experiment (xi)

• Controllable input variable hypothesised as potentially influencing the response variable(s)

• Key questions

• Why is it being included - theory or internal politics?

• How accurately can you control its levels?

• How wide apart can / should the levels be set?

• Are the chosen levels ‘realistic’ for normal process operation?

Background variable

• An influential variable that cannot easily be varied or controlled under normal conditions.

• Not one that you are primarily interested in studying

• Its influence needs only to be measured and known

• If uncontrolled, it normally manifests as systematic variation of some kind in the response variable.

Nuisance variable

• Unknown variable that affects the response randomly

• ‘Lurking’ or extraneous

• Manifests as random variation (noise) in the response variable

Interaction

• When the effect of a factor on the response variable is not independent of that of another factor

• For example, the level of factor A affects the magnitude or direction of the effect that factor B has on the response

• This is a central concept in DoE. Its existence is a key reason for using factorial experiments.

Useful experimental designs for analytic studies

• Basic comparisons – 1 factor; 2 levels (before / after; group A / group B)

• Completely Randomised Design (CRD) – 1 factor; ≥2 levels

• Randomised Block Design [RBD] – 1 factor (≥ 2 levels); 1 background factor (≥ 2 levels)

• Latin Square Designs – 1 factor (≥ 2 levels); ≥ 1 background factors

• Full factorial, Fractional factorial – ≥ 1 factor; ≥ 2 levels each factor; background factors (optional); interactions suspected

• Central Composite Designs – factorial designs with centre points AND axial points; use for suspected non-linearity of response

1 Factor Completely Randomised Design

Factor – type of drug (A, B or C)

(i) Randomised Run Order

Factor: Type of hypertensive drug
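A randomised run order for the nine trials (three replicates of each drug) can be produced with a plain shuffle. A sketch; the drug labels and seed are assumptions for illustration:

```python
import random

# Completely randomised design: 3 drugs x 3 replicates = 9 trials,
# run in random order so time trends don't bias any one drug.
drugs = ["A", "B", "C"]
trials = [drug for drug in drugs for _ in range(3)]

random.seed(42)  # fixed seed only so this sketch is reproducible
random.shuffle(trials)

run_order = list(enumerate(trials, start=1))  # (run number, drug) pairs
print(run_order)
```

The fixed seed is purely for reproducibility of the sketch; in a real trial the randomisation would not be seeded.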

Replicate   Drug A   Drug B   Drug C
1           (3)      (6)      (9)
2           (7)      (1)      (2)
3           (4)      (5)      (8)

(numbers in parentheses = randomised run order)

1 Factor Randomised Block Design

Factor – type of drug (A, B or C)
Background variable – condition of patient: (I), (II) or (III)

Blocks: (I) patient with hypertension; (II) diabetic patient with hypertension; (III) patient with heart disease and hypertension

Type of hypertensive drug   (I)   (II)   (III)
A                           (2)   (5)    (9)
B                           (3)   (4)    (7)
C                           (1)   (6)    (8)

2 Factor Full Factorial Design and ‘Cube’

Factors A and B; levels – and + (low and high)

Example: 3 Factor Factorial Design and ‘Cube’
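A two-level full factorial runs every combination of the − and + levels; for three factors that is 2³ = 8 runs, the corner points of the ‘cube’. A minimal enumeration sketch:

```python
from itertools import product

# Full factorial design for factors A, B, C at two levels each (- and +):
# every combination of levels is run, so all interactions are estimable.
runs = list(product(["-", "+"], repeat=3))

for i, (a, b, c) in enumerate(runs, start=1):
    print(f"run {i}: A={a} B={b} C={c}")

print(len(runs))  # 2**3 = 8 corner points of the cube
```

The actual run sequence would then be randomised, just as in the completely randomised design earlier.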

Factors A, B and C; levels – and + (low and high)

Central Composite Design

From the following case study (hence x3 and x4 used)

Designs for the current level of knowledge

Case study of experimentation in health care

DMAIC: Define → Measure → Analyse → Improve → Control performance

Step 1 – Process flowchart

= a ‘model’ of the system under investigation

Step 2 – Identification of metric

• The discharge time for the insured patient started when the specialist signed the discharge order and ended when the file reached the accounting department.

Step 3 – Collection of observations

• This stage was used to collect real-life data on the discharge time for the insured patient.
• These data points were collected over a period of 5 weeks.

Step 3 – Data

Step 4 – Statistical analysis

Removal of outliers: N_before = 31, N_after = 27

Distribution fitting: data are normally distributed

Assessment of the state of control of the process: exclusion of points, N_before = 27, N_after = 24; special causes were found for all out-of-control points

Determination of process sigma level

Z = (USL − x̄) / s

Z = (50 − 58.15) / 21.19 = −0.385 ⇒ Z(long term) = −0.385

Z(short term) = Z(long term) + 1.5 = 1.115

⇒ ppm = 650,000
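The ppm figure follows directly from the standard normal CDF. A quick check of the slide's numbers (USL = 50 min, mean = 58.15 min, s = 21.19 min):

```python
from math import erf, sqrt

# Values from the case study: upper spec limit, process mean, process sd.
usl, mean, s = 50.0, 58.15, 21.19

z_long_term = (usl - mean) / s  # ~ -0.385: mean is already past the spec

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

fraction_over_usl = 1.0 - phi(z_long_term)  # ~0.65 of discharges too slow
ppm = fraction_over_usl * 1_000_000         # ~650,000 defects per million

z_short_term = z_long_term + 1.5            # conventional 1.5-sigma shift
print(round(z_long_term, 3), round(z_short_term, 3), round(ppm))
```

Because the mean (58.15 min) already exceeds the 50-minute limit, Z is negative and well over half of all discharges are predicted to be out of specification.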

So 65% of discharges were predicted to exceed the upper spec limit

Simulation model developed

Brainstorming: reasons for long discharge time

= a cause-effect model of the system

Analysis of the impact of main factors on discharge times

Design of Experiments: factors and levels

(simulations were used to experiment on the process)

Data

Significant effects and interactions

Steepest ascent trials

Central Composite Design (CCD)

New data

Contour plot – shows the ‘response surface’ for the response variable under investigation
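The direction for the steepest ascent trials falls straight out of a fitted first-order model: move the factors in proportion to their coefficients. A sketch with invented coefficients (these are not the study's fitted values):

```python
# Steepest-ascent direction from a fitted first-order model
#   y_hat = b0 + b3*x3 + b4*x4   (coded units; coefficients invented).
b3, b4 = -4.0, 2.5  # illustrative effects of x3 and x4 on discharge time

# To DECREASE discharge time, move against the gradient (b3, b4).
step = (-b3, -b4)
norm = (step[0] ** 2 + step[1] ** 2) ** 0.5
unit_step = (step[0] / norm, step[1] / norm)
print(unit_step)  # direction to move (x3, x4) per unit step, in coded units
```

Trials then proceed along this direction in small increments until the response stops improving, at which point a central composite design can map the curvature around the new region.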

Go in this direction (as far as practicable)

Before / after comparison

Actual data: NOT simulated!

Sigma Quality Level (SQL) before: 1.11 → SQL after: 2.53