
Questionnaire Validity and Reliability

Department of Social and Preventive Medicine, Faculty of Medicine

Outline

Introduction

What is validity and reliability?

Types of validity and reliability. How do you measure them?

Types of Sampling Methods

Sample size calculation

G*Power (Power Analysis)

• Research: the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.

• In the broadest sense of the word, research includes the gathering of data in order to generate information and establish facts for the advancement of knowledge.

● Step 1: Define the research problem

● Step 2: Developing a research plan & research design

● Step 3: Define the variables & instrument (validity & reliability)

● Step 4: Sampling & Collecting data

● Step 5: Analysing data

● Step 6: Presenting the findings

A questionnaire is:

• A technique for collecting data in which a respondent provides answers to a series of questions.

• The vehicle used to pose the questions that the researcher wants respondents to answer.

• The validity of the results depends on the quality of these instruments.

• Good questionnaires are difficult to construct; bad questionnaires are difficult to analyze.

• Identify the goal of your questionnaire.

•What kind of information do you want to gather with your questionnaire?

• What is your main objective?

• Is a questionnaire the best way to go about collecting this information?

How To Obtain Valid Information

• Ask purposeful questions
• Ask concrete questions
• Use time periods based on importance of the questions
• Use conventional language
• Use complete sentences
• Avoid abbreviations
• Use shorter questions

Validity and Reliability

• Validity: How well does the measure or design do what it purports to do?

• Reliability: How consistent or stable is the instrument? Is the instrument dependable?

Which device can measure body temperature? Seven repeated readings (°C) from two devices:

Reading   Device 1   Device 2
1         37.2       37.2
2         37.2       39.3
3         37.1       34.4
4         37.2       28.2
5         37.2       43.3
6         37.2       37.0
7         37.2       39.0
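As a rough illustration, here is a minimal Python sketch using the readings from the table above: a reliable device is one whose repeated readings barely vary.

```python
# A minimal sketch: summarize the repeated readings of each device.
from statistics import mean, stdev

device_1 = [37.2, 37.2, 37.1, 37.2, 37.2, 37.2, 37.2]
device_2 = [37.2, 39.3, 34.4, 28.2, 43.3, 37.0, 39.0]

for name, readings in [("Device 1", device_1), ("Device 2", device_2)]:
    print(f"{name}: mean = {mean(readings):.1f}, SD = {stdev(readings):.2f}")

# Device 1 has a near-zero standard deviation (consistent -> reliable);
# Device 2 fluctuates widely, so it cannot be a dependable measure of body temperature.
```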

Validity overview: Logical validity (Face, Content); Statistical validity — Construct (Convergent, Divergent/Discriminant) and Criterion (Concurrent, Predictive). Reliability: Consistency, Objectivity.

The Major Decisions in Questionnaire Design

1. Content - What should be asked?
2. Wording - How should each question be phrased?
3. Sequence - In what order should the questions be presented?
4. Layout - What layout will best serve the research objectives?

Face Validity

– Face validity is the extent to which a test is subjectively viewed as covering the concept it purports to measure.
– As a check on face validity, test/survey items are sent to experts to obtain suggestions for modification.
– Because of its vagueness and subjectivity, psychometricians abandoned this concept long ago.

Content Validity

– In psychometrics, content validity (also known as logical validity) refers to the extent to which a measure represents all facets of a given construct.
– Face validity vs. content validity:
  • Face validity can be established by one person.
  • Content validity should be checked by a panel, and thus usually goes hand in hand with inter-rater reliability (Kappa!).

The Content Validity Index (CVI)

Content validity has been defined as follows:

• (1) ‘‘...the degree to which an instrument has an appropriate sample of items for the construct being measured’’ (Polit & Beck, 2004, p. 423);

• (2) ‘‘...whether or not the items sampled for inclusion on the tool adequately represent the domain of content addressed by the instrument’’ (Waltz, Strickland, & Lenz, 2005, p. 155);

• (3) ‘‘...the extent to which an instrument adequately samples the research domain of interest when attempting to measure phenomena’’ (Wynd, Schmidt, & Schaefer, 2003, p. 509).

There are two types of CVIs:

• the content validity of individual items (I-CVI)
• the content validity of the overall scale (S-CVI)

• Researchers use I-CVI information to guide them in revising, deleting, or substituting items

• I-CVIs tend only to be reported in methodological studies that focus on descriptions of the content validation process

• Most often reported in scale development studies is the CVI.

CVI: the degree to which an instrument has an appropriate sample of items for the construct being measured.
• I-CVI: content validity of individual items.
• S-CVI: content validity of the overall scale.
  – S-CVI/UA: proportion of items on the scale that achieve a relevance rating of 3 or 4 by all the experts.
  – S-CVI/Ave: average of the I-CVIs.

Example expert rating form — each item (Q1-Q5) is rated ① ② ③ ④ on each of four criteria, with space for comments:
• Are the items relevant to the concepts related to the dissertation topic?
• Are the items representative of concepts related to the dissertation topic?
• Question clarity in terms of wording
• Are the items in the instruments consistent?

Ratings: 1 = not relevant, 2 = somewhat relevant, 3 = quite relevant, 4 = highly relevant.
I-CVI: item-level content validity index. S-CVI: content validity index for the scale.

Acceptable standard for the S-CVI: a minimum S-CVI of .80 is recommended.

Content validity index

The I-CVI expresses the proportion of agreement on the relevancy of each item.

If the I-CVI is higher than 79%, the item is appropriate. If it is between 70% and 79%, it needs revision. If it is less than 70%, it is eliminated.
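These calculations can be scripted directly. Below is a minimal Python sketch with hypothetical expert ratings (the item names and scores are made up for illustration); it computes each I-CVI as the proportion of experts giving a rating of 3 or 4, the S-CVI/Ave as the average of the I-CVIs, and the S-CVI/UA as the proportion of items rated relevant by all experts.

```python
# Minimal sketch: I-CVI and S-CVI from 4-point relevance ratings (1-2 = not relevant, 3-4 = relevant).
ratings = {                      # item -> one rating per expert (hypothetical data)
    "Q1": [4, 3, 4, 4, 3],
    "Q2": [4, 4, 2, 3, 4],
    "Q3": [2, 3, 1, 2, 3],
}

i_cvi = {item: sum(r >= 3 for r in scores) / len(scores)
         for item, scores in ratings.items()}
s_cvi_ave = sum(i_cvi.values()) / len(i_cvi)                    # average of the I-CVIs
s_cvi_ua = sum(v == 1.0 for v in i_cvi.values()) / len(i_cvi)   # universal agreement

for item, value in i_cvi.items():
    verdict = "appropriate" if value > 0.79 else "revise" if value >= 0.70 else "eliminate"
    print(item, round(value, 2), verdict)
print("S-CVI/Ave =", round(s_cvi_ave, 2), "| S-CVI/UA =", round(s_cvi_ua, 2))
```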

Content Validity Ratio (CVR)

CVR = (ne − N/2) / (N/2)

In this formula:

• ne = the number of specialists who have chosen the "Necessary" option
• N = the total number of assessors
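A minimal Python sketch of the CVR formula, using the symbols defined above (the 9-of-11 panel in the example is hypothetical):

```python
# Minimal sketch of the content validity ratio.
def content_validity_ratio(n_e: int, n: int) -> float:
    """CVR = (ne - N/2) / (N/2): ne = experts choosing 'Necessary', N = total assessors."""
    return (n_e - n / 2) / (n / 2)

# Hypothetical example: 9 of 11 panellists rate the item as 'Necessary'.
print(round(content_validity_ratio(9, 11), 2))   # 0.64
```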

Kappa statistic

The kappa statistic is a consensus index of inter-rater agreement that adjusts for chance agreement. It is an important supplement to the CVI because kappa provides information about the degree of agreement beyond chance.

Evaluation criteria for kappa:
• above 0.74 = excellent
• 0.60 to 0.74 = good
• 0.40 to 0.59 = fair

To calculate the modified kappa statistic, the probability of chance agreement is first calculated for each item by the following formula:

PC = [N! / (A! (N − A)!)] × 0.5^N

• N = number of experts in a panel
• A = number of panelists who agree that the item is relevant

After calculating the I-CVI for all instrument items, kappa is computed by entering the probability of chance agreement (PC) and the content validity index of each item (I-CVI) into the following formula:

K = (I-CVI − PC) / (1 − PC)
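Putting the two formulas together, a minimal Python sketch (with a hypothetical 5-expert panel) might look like this:

```python
# Minimal sketch of the modified kappa computation described above.
from math import comb

def chance_agreement(n_experts: int, n_agree: int) -> float:
    """PC = [N! / (A!(N - A)!)] * 0.5**N  -- probability of chance agreement."""
    return comb(n_experts, n_agree) * 0.5 ** n_experts

def modified_kappa(i_cvi: float, n_experts: int, n_agree: int) -> float:
    pc = chance_agreement(n_experts, n_agree)
    return (i_cvi - pc) / (1 - pc)

# Hypothetical item: 5 experts, 4 rate it relevant -> I-CVI = 0.80.
print(round(modified_kappa(0.80, 5, 4), 2))   # about 0.76, i.e. 'excellent'
```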

Validity overview (recap): Logical validity (Face, Content); Statistical validity — Construct (Convergent, Divergent/Discriminant) and Criterion (Concurrent, Predictive). Reliability: Consistency, Objectivity.

Criterion Validity

• This type of validity is used to measure the ability of an instrument to predict future outcomes.
• Validity is usually determined by comparing the ability of two instruments to predict a similar outcome with a single variable being measured.
• There are two major types of criterion validity:
  – predictive
  – concurrent

• A test has high criterion validity if:
  – it correlates highly with some external benchmark (concurrent);

  – it correlates well with outcome criteria (predictive).

Concurrent Validity

• Concurrent validity is used when two instruments are used to measure the same event at the same time.
• The instrument correlates highly with some external benchmark measured concurrently.
• Example:
  – DASS -> measuring depression
  – New instrument -> measuring depression

Predictive Validity

• Predictive validity is used when the instrument is administered, time is allowed to pass, and the result is then measured against another outcome.

• Regression analysis can be applied to establish criterion validity.
• An independent variable is used as the predictor variable and a dependent variable as the criterion variable.
• The correlation coefficient between them is called the validity coefficient.

How is Criterion Validity Measured?

• The statistical measure, or correlation coefficient, tells the degree to which the instrument is valid based on the measured criteria.
• What does it look like in an equation?
  – The symbol "r" denotes the correlation coefficient.
  – A higher "r" value shows a stronger positive relationship between the instruments.
  – A negative "r" value shows an inverse relationship.
• As a rule of thumb, for the absolute value of r:
  – 0.00-0.19: very weak
  – 0.20-0.39: weak
  – 0.40-0.59: moderate
  – 0.60-0.79: strong
  – 0.80-1.00: very strong
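As an illustration of how a validity coefficient is obtained in practice, here is a minimal Python sketch that correlates hypothetical scores from a new instrument with an established benchmark (the data and the use of SciPy's pearsonr are assumptions for illustration):

```python
# Minimal sketch: criterion validity as the correlation between a new
# instrument and an established benchmark (hypothetical scores).
from scipy.stats import pearsonr

new_instrument = [12, 18, 25, 9, 30, 22, 15, 27]
benchmark      = [14, 20, 24, 8, 33, 21, 17, 29]   # e.g. an established depression scale

r, p_value = pearsonr(new_instrument, benchmark)
print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")
# By the rule of thumb above, |r| of 0.80-1.00 indicates a very strong relationship.
```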

Validity overview (recap): Logical validity (Face, Content); Statistical validity — Construct (Convergent, Divergent/Discriminant) and Criterion (Concurrent, Predictive). Reliability: Consistency, Objectivity.

Construct Validity

• Construct validity is about measuring things that are in our theory of a domain.
• The construct is sometimes called a latent variable:
  – you can't directly observe the construct;
  – you can only measure its surface manifestations.
• Because it is concerned with abstract and theoretical constructs, construct validity is also known as theoretical validity.

What are Latent Variables?

• Most/all variables in the social world are not directly observable.
• This makes them 'latent' or hypothetical constructs.
• We measure latent variables with observable indicators, e.g. questionnaire items.
• We can think of the variance of an observable indicator as being partially caused by:
  – the latent construct in question
  – other factors (error)

Example: the latent construct "math anxiety" measured by observable indicators:
• I cringe when I have to go to math class.
• I am uneasy about going to the board in math class.
• I am afraid to ask questions in math class.
• I am always worried about being called on in math class.
• I understand math now, but I worry that it's going to get really difficult soon.

Formative versus reflective constructs

• Specifying formative versus reflective constructs is a critical preliminary step prior to further statistical analysis. Specification follows these guidelines:
• Formative
  – Direction of causality is from measure to construct
  – No reason to expect the measures are correlated
  – Indicators are not interchangeable
• Reflective
  – Direction of causality is from construct to measure
  – Measures are expected to be correlated
  – Indicators are interchangeable

Construct, dimension, subscale, factor, component

• This construct has eight dimensions (e.g. intelligence has eight aspects)
• This scale has eight subscales (e.g. the survey measures different but weakly related things)
• The factor structure has eight factors/components (e.g. in EFA/PCA)

Exploratory Factor Analysis

• Exploratory factor analysis (EFA) is a statistical approach to determining the correlation among the variables in a dataset.

• This type of analysis provides a factor structure (a grouping of variables based on strong correlations).

• EFA is good for detecting "misfit" variables. In general, an EFA prepares the variables to be used for cleaner structural equation modeling. An EFA should always be conducted for new datasets.

Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy

• The Kaiser-Meyer-Olkin measure of sampling adequacy tests whether the partial correlations among variables are small.
• KMO statistics:
  – Marvelous: .90s
  – Meritorious: .80s
  – Middling: .70s
  – Mediocre: .60s
  – Miserable: .50s
  – Unacceptable: <.50

Bartlett's Test of Sphericity

• Tests the hypothesis that the correlation matrix is an identity matrix:
  – diagonals are ones
  – off-diagonals are zeros
• A significant result (Sig. < 0.05) indicates the matrix is not an identity matrix; i.e., the variables relate to one another enough to run a meaningful EFA.
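For readers working in Python, these two checks can be run with the third-party factor_analyzer package; the package helpers, the file name, and the data below are assumptions for illustration, not part of the original slides.

```python
# Minimal sketch: KMO and Bartlett's test before running an EFA.
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

df = pd.read_csv("survey_items.csv")          # hypothetical file of Likert-type items

chi_square, p_value = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_total = calculate_kmo(df)

print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")  # want p < .05
print(f"Overall KMO = {kmo_total:.2f}")                              # want >= .60, ideally .80+
```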

Communalities

• A communality is the extent to which an item correlates with all other items.

• Higher communalities are better. If communalities for a particular variable are low (between 0.0-0.4), then that variable will struggle to load significantly on any factor.

• In the communalities table, identify low values in the "Extraction" column. Low values indicate candidates for removal after you examine the pattern matrix.
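Continuing the same assumed setup, here is a minimal sketch of extracting communalities after fitting an EFA with the factor_analyzer package (the number of factors and the data file are hypothetical):

```python
# Minimal sketch: fit an EFA and flag items with low communalities (< 0.4).
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("survey_items.csv")                   # hypothetical data file

fa = FactorAnalyzer(n_factors=3, rotation="promax")    # number of factors is an assumption
fa.fit(df)

for item, h2 in zip(df.columns, fa.get_communalities()):
    flag = "  <- candidate for removal" if h2 < 0.4 else ""
    print(f"{item}: communality = {h2:.2f}{flag}")
```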

Parallel Analysis

• Parallel analysis is a method for determining the number of components or factors to retain from PCA or factor analysis. Essentially, the program works by creating a random dataset with the same numbers of observations and variables as the original data. https://www.statstodo.com
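A minimal NumPy sketch of Horn's parallel analysis under these assumptions (it compares observed eigenvalues against the mean eigenvalues of random data; some implementations use the 95th percentile instead):

```python
# Minimal sketch of parallel analysis: keep components whose eigenvalues
# exceed those expected from random data of the same shape.
import numpy as np

def parallel_analysis(data: np.ndarray, n_iter: int = 100, seed: int = 0) -> int:
    rng = np.random.default_rng(seed)
    n_obs, n_vars = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    random_eigs = np.zeros((n_iter, n_vars))
    for i in range(n_iter):
        random_data = rng.standard_normal((n_obs, n_vars))
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False))[::-1]
    random_mean = random_eigs.mean(axis=0)
    n_keep = 0
    for obs, rand in zip(observed, random_mean):   # stop at the first eigenvalue that fails
        if obs > rand:
            n_keep += 1
        else:
            break
    return n_keep

# Hypothetical usage:
# data = np.loadtxt("survey_items.csv", delimiter=",", skiprows=1)
# print("Components to retain:", parallel_analysis(data))
```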

Factor analysis for dichotomous variables

• Use the FACTOR software, which runs factor analysis and, simultaneously, parallel analysis for binary (dichotomous) data.

Establishing Construct Validity

• Convergent validity
  – The measure agrees with other measures of the same thing.
• Divergent/Discriminant validity
  – Different tests measure different things.
  – Does the test have the "ability to discriminate"?
  – (Campbell & Fiske, 1959)

Construct validity

Construct validity is the extent to which a set of measured items actually reflects the theoretical latent construct those items are designed to measure. Thus, it deals with the accuracy of measurement.

Construct validity is made up of two important components:

1) Convergent validity: the items that are indicators of a specific construct should converge, or share a high proportion of variance in common. The ways to estimate the relative amount of convergent validity among item measures are factor loadings, average variance extracted (AVE), and construct reliability (see below).

2) Discriminant validity: the extent to which a construct is truly distinct from other constructs. To provide evidence of discriminant validity, the AVE for two factors should be greater than the square of the correlation between the two factors.
• Discriminant validity can be tested by examining the AVE for each construct against the squared correlations (shared variance) between that construct and all other constructs in the model.

• A construct will have adequate discriminant validity if the AVE exceeds the squared correlation among the constructs (Fornell & Larcker, 1981; Hair et al., 2006).

Factor loading: at a minimum, all factor loadings should be statistically significant. A good rule of thumb is that standardized loading estimates should be .5 or higher, and ideally .7 or higher.

Average Variance Extracted (AVE): the average squared factor loading. An AVE of 0.5 or higher is a good rule of thumb suggesting adequate convergence. An AVE of less than .5 indicates that, on average, more error remains in the items than variance explained by the latent factor structure imposed on the measure (Hair et al., 2006, p. 777).

Construct reliability: construct reliability should be .7 or higher to indicate adequate convergence or internal consistency.
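A minimal Python sketch of these convergent- and discriminant-validity checks, using hypothetical standardized loadings and the common composite-reliability formula (the loadings, the inter-construct correlation, and the CR formula are assumptions for illustration):

```python
# Minimal sketch: AVE, composite/construct reliability, and the Fornell-Larcker check.
loadings_a = [0.78, 0.81, 0.72, 0.69]      # hypothetical standardized loadings, construct A
loadings_b = [0.75, 0.80, 0.71]            # hypothetical standardized loadings, construct B
corr_ab = 0.55                             # hypothetical correlation between A and B

def ave(loadings):                         # Average Variance Extracted = mean squared loading
    return sum(l ** 2 for l in loadings) / len(loadings)

def construct_reliability(loadings):       # common composite reliability (CR) formula
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error)

for name, lds in [("A", loadings_a), ("B", loadings_b)]:
    print(f"Construct {name}: AVE = {ave(lds):.2f} (want >= .50), "
          f"CR = {construct_reliability(lds):.2f} (want >= .70)")

# Fornell-Larcker criterion: each AVE should exceed the squared inter-construct correlation.
print("Discriminant validity:", ave(loadings_a) > corr_ab ** 2 and ave(loadings_b) > corr_ab ** 2)
```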

Models in CFA/SEM: individual model (first-order CFA), measurement model, and structural model (structural equation modeling, SEM).

Developing Assessments

• Purpose: What are you trying to measure?

• Review: What assessments do you already have that purport to measure this?

• Purchase or Develop: If necessary, consider commercial assessments or create a new assessment.

Considerations

• Using what you already have:
  – Is it carefully aligned to your purpose?
  – Is it carefully matched to your purpose?
  – Do you have the funds (for assessment, equipment, training)?
• Developing a new assessment:
  – Do you have the in-house content knowledge?
  – Do you have the in-house assessment knowledge?
  – Does your team have time for development?
  – Does your team have the knowledge and time needed for proper scoring?

– Identify the goal of your questionnaire.
– What kind of information do you want to gather with your questionnaire?
– What is your main objective?
– Is a questionnaire the best way to go about collecting this information?

Adopting an Instrument vs. Adapting an Instrument

• Adopting an instrument is quite simple and requires very little effort. Even when an instrument is adopted, there still might be a few modifications that are necessary.
• Adapting an instrument requires more substantial changes than adopting an instrument. In this situation, the researcher follows the general design of another instrument but adds items, removes items, and/or substantially changes the content of each item.
• Whenever possible, it is best for an instrument to be adopted.
• When this is not possible, the next best option is to adapt an instrument.
• However, if there are no other instruments available, then the last option is to develop an instrument.

Which types of validity to check at each step, by instrument strategy (+ = required, +/- = sometimes, - = not required):

Step                 Type of validity           Development   Adaption   Adoption
Pretest              Logical: Face              +             +/-        +/-
                     Logical: Content           +             +          +/-
Pilot / main study   Criterion: Concurrent      +             +          -
                     Criterion: Predictive      +             +          -
                     Construct: Convergent      +             +          +
                     Construct: Divergent       +             +          +
                     Reliability

Types of Reliability

• Test-Retest Reliability: degree of temporal stability of the instrument.
  – Assessed by having the instrument completed by the same people during two different time periods.
• Alternate-Forms Reliability: degree of relatedness of different forms of the test.
  – Used to minimize inflated reliability correlations due to familiarity with test items.

Types of Reliability (cont.)

• Internal-Consistency Reliability: overall degree of relatedness of all test items or raters.
  – Also called reliability of components.
• Item-to-Item Reliability: the reliability of any single item on average.
• Judge-to-Judge Reliability: the reliability of any single judge on average.
• Cronbach's alpha is used to evaluate the internal consistency of observed items; factor analysis can then be applied to extract latent constructs from these consistent observed variables.
  – Alpha > 0.90 suggests the questions are asking the same thing.
  – 0.7 to 0.9 is the acceptable range.
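A minimal Python sketch of Cronbach's alpha computed from a small hypothetical respondents-by-items matrix:

```python
# Minimal sketch: Cronbach's alpha = (k / (k-1)) * (1 - sum(item variances) / variance of total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                                 # number of items (columns)
    item_vars = items.var(axis=0, ddof=1).sum()        # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)          # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = np.array([[4, 5, 4, 4],                       # hypothetical Likert responses
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 2],
                   [4, 4, 5, 4]])
print(f"alpha = {cronbach_alpha(scores):.2f}")         # compare against the guidelines above
```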

Remember!

An assessment that is highly reliable is not necessarily valid. However, for an assessment to be valid, it must also be reliable.

Improving Validity & Reliability

• Ensure questions are based on standards
• Ask purposeful questions
• Ask concrete questions
• Use time periods based on importance of the questions
• Use conventional language
• Use complete sentences
• Avoid abbreviations
• Use shorter questions

Overall Cronbach Coefficient Alpha

• One may argue that when a high Cronbach alpha indicates a high degree of internal consistency, the test or survey must be uni-dimensional rather than multi-dimensional, and thus there is no need to further investigate its subscales. This is a common misconception.

Performing the Pilot Test

• A pilot test involves conducting the survey on a small, representative set of respondents in order to reveal questionnaire errors before the survey is launched.

• Cronbach's alpha measures the intercorrelations among test items, and is thus known as an internal consistency estimate of the reliability of test scores.

• It is important to run the pilot test on respondents that are representative of the target population to be studied.

• Test-retest reliability refers to the degree to which test results are consistent over time. To measure test-retest reliability, we must first give the same test to the same individuals on two occasions and correlate the scores.
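A minimal sketch of the test-retest check described above, correlating hypothetical scores from two administrations of the same test (the data and the use of SciPy are assumptions for illustration):

```python
# Minimal sketch: test-retest reliability as the correlation between two administrations.
from scipy.stats import pearsonr

time_1 = [22, 35, 28, 40, 31, 27, 36, 30]   # hypothetical scores, first administration
time_2 = [24, 33, 27, 41, 30, 29, 35, 28]   # same respondents, second administration

r, _ = pearsonr(time_1, time_2)
print(f"test-retest reliability r = {r:.2f}")
```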

Thanks for your attention! Thank you.

Dr Mahmoud Danaee, [email protected]

Senior Visiting Research Fellow, Department of Social and Preventive Medicine, Faculty of Medicine