
Chapter 10: Relationships Between Two Variables

1. Constructing a Bivariate Table
2. Elaboration
   – Spurious relationships
   – Intervening relationships
   – Conditional relationships

Introduction

• Bivariate Analysis: A statistical method designed to detect and describe the relationship between two nominal or ordinal variables (typically independent and dependent variables).

• Cross-Tabulation: A technique for analyzing the relationship between two or more nominal or ordinal variables. Allows for consideration of "control" variables.

Chapter 6 – 1 Chapter 6 – 2

Constructing a Bivariate Table

• Column variable: A variable whose categories are the columns of a bivariate table (from my experience it is usually the dependent variable).

• Row variable: A variable whose categories are the rows of a bivariate table (from my experience it is usually the independent variable).

• Marginals: The row and column totals in a bivariate table.

Example of Bivariate Crosstabulation:
Support for Abortion by Job Security (absolute numbers provided)

                                Job Security
                       Can Find      Can Not Find
                       Job Easy      Job Easy        Row Total
Support       Yes         24             25              49
for Abortion  No          20             26              46
Column Total              44             51              95

What are the dependent and independent variables? What are the column and row variables? Marginals?
What is the disadvantage of providing only absolute numbers? What is the advantage of providing percentages?

Constructing a Bivariate Table: Percentages Can Be Computed in Different Ways

1. Column Percentages: column totals as base
2. Row Percentages: row totals as base

We typically provide percentages within categories of the independent variable.

Column Percentages: Effect of Job Security on Support for Abortion (absolute numbers in parentheses)

                       Can Find      Can Not Find
Abortion               Job Easy      Job Easy        Row Total
Yes                    55% (24)      49% (25)        52% (49)
No                     45% (20)      51% (26)        48% (46)
Column Total           100%          100%
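The column-percentage calculation can be sketched in code. This is not from the slides: the counts come from the abortion/job-security table shown earlier, and the data layout and function name are my own.

```python
# The "Support for Abortion by Job Security" counts from the slides,
# keyed by IV category (columns), then DV category (rows).
observed = {
    "Can Find Job Easy":     {"Yes": 24, "No": 20},
    "Can Not Find Job Easy": {"Yes": 25, "No": 26},
}

def column_percentages(table):
    """Percentage each cell against its column total (the IV categories)."""
    result = {}
    for col, cells in table.items():
        total = sum(cells.values())
        result[col] = {row: 100 * n / total for row, n in cells.items()}
    return result

pcts = column_percentages(observed)
for col, cells in pcts.items():
    print(col, {row: round(p) for row, p in cells.items()})
```

Rounded to whole percents this reproduces the 55%/45% and 49%/51% columns from the slide.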

Questions to Answer When Examining a Bivariate Relationship

1. What are the dependent and independent variables?

2. Does there appear to be a relationship? (The chi square is usually used with a crosstabulation.)

3. How strong is it? (There are "measures of association" that will indicate the strength of the relationship. We will learn a few, such as "lambda" and "gamma".)

4. What is the direction of the relationship?

Direction of the Relationship

• Positive relationship: A relationship between two variables (i.e., a bivariate relationship) measured at the ordinal level or higher in which the variables vary in the same direction (both go up or both go down).

• Negative relationship: A bivariate relationship measured at the ordinal level or higher in which the variables vary in opposite directions (when one goes up the other goes down).

A Positive Relationship (as "class" goes up, "health" goes up)

A Negative Relationship (as "class" goes up, "traumas" go down)


Elaboration

• Elaboration is a process designed to further explore a bivariate relationship; it involves the introduction of a "control" variable (it's the process of "elaborating" on the relationship between two variables by considering a third variable).

• A control variable is an additional variable considered in a bivariate relationship. This third variable is "controlled for" when we examine the relationship between two variables.

More Examples

Which are likely to be positive relationships and which negative relationships?

1. The relationship between hrs. studying and grades
2. The relationship between partying and grades
3. The relationship between "amount of sleep" and grades
4. The relationship between "color of shoes" and grades

Elaboration Tests

– Spurious relationships
– Intervening relationships
– Conditional relationships

1. Testing for a Spurious Relationship

• A spurious relationship is a relationship in which both the independent variable (IV) and the dependent variable (DV) are influenced by a third variable. The IV and DV are not causally linked, although it might appear so if one were unaware of the third variable.

• The relationship between the IV and DV is said to be "explained away" by the control variable.

• A direct causal relationship is a relationship between two variables that cannot be accounted for (or explained away) by other variables. It is a "nonspurious" relationship.


Example of a bivariate relationship that appears causal (but is spurious), prior to testing for spuriousness:

# of Firefighters and Property Damage

Number of firefighters at the scene of a fire (IV)  --(+)-->  Property Damage (DV)


A second example of a spurious relationship. A relationship between two variables prior to considering a third variable (that is, prior to elaboration):

Sale of Ice Cream  <--(+)-->  Number of Outdoor Crimes   (the two variables appear related)

Example of a third (control) variable causing a "spurious" relationship (elaboration considers control variables):

Outdoor Temperature  --(+)-->  Sale of Ice Cream
Outdoor Temperature  --(+)-->  Number of Outdoor Crimes

In-Class Assignment:

Write down an example of a spurious relationship (don't confer with your neighbor; just do the best you can to think of one).

Identify the dependent and independent variables and the "control" variable that is causing the spurious relationship.

2. Elaboration Can Test for an Intervening Relationship

• Intervening relationship: a relationship in which the control variable intervenes between the independent and dependent variables.

• Intervening variable: a control variable that follows an independent variable but precedes the dependent variable in a causal sequence.


Examination of two variables prior to considering a third "intervening" variable:

Attending Weekday Parties (independent variable)  -->  Grades (dependent variable)

Intervening Relationship: Examination of an intervening variable between two other variables:

Attending Weekday Parties (IV)  -->  Hours Studying (Intervening Variable)  -->  Grades (DV)


In-Class Assignment:

What is another example of an intervening relationship?

What are the dependent and independent variables, and what is the "control" variable that is intervening between the two?

3. Elaboration Tests for Conditional Relationships

• Conditional relationship: a relationship in which the independent variable's effect on the dependent variable depends on (or is conditioned by) a category of a control variable.

• The relationship between the independent and dependent variables will change according to the different conditions (or categories) of the control variable.

Example of a relationship between two variables prior to considering a "conditional" control variable:

# of Dolls Owned  --(+)-->  Hours Spent Playing with Dolls

Example of a Conditional Relationship:

# of Dolls Owned  -->  Hours Spent Playing with Dolls
conditioned by: Gender of Child (two categories: male and female)


Another Example of a Conditional Relationship:

Empowering the Employee (letting him/her make decisions)  --(+)-->  Job Satisfaction

With the control variable added:

Empowering the Employee (letting him/her make decisions)  -->  Job Satisfaction
conditioned by: Maslow's Hierarchy of Need (categories from basic "physical" needs to "self actualization" and "esteem" needs)


Three Goals of Elaboration

1. Elaboration allows us to test for spurious relationships.

2. Elaboration clarifies the causal sequence of bivariate relationships by introducing variables hypothesized to intervene between the IV and DV.

3. Elaboration specifies the different conditions under which the original bivariate relationship might hold.

Maslow's Hierarchy of Needs, 1943

One's level of need is a condition for how empowerment affects job satisfaction.

Religion and Society (negative effects)
http://www.youtube.com/watch?v=YxTZv8c_GBM&feature=related

Chapter 11: Tests of Statistical Significance (such as the t-test and Chi Square) and Measures of Association


What is the Chi Square?

Just like the t-test, the Chi Square is a test of statistical significance providing the level of probability that the null hypothesis is true.

If the probability of it being true is less than 5% (.05), we will reject the null hypothesis and accept the research hypothesis.

A test of statistical significance, like the t-test and chi square, gives us the probability that the null hypothesis is correct.

If the t value shows us that there is a 5% (.05) or less chance that the null hypothesis is correct, we reject the null hypothesis and accept the research hypothesis.


What is the difference between a test of statistical significance and measures of association?

1. A test of statistical significance tests whether we should reject the null hypothesis and accept the research hypothesis.

2. Measures of association (such as lambda and gamma) measure the "strength" of the relationship.

Why do we need a Chi Square test when we have the t-test?

The t-test requires interval level data. A chi square can be used with ordinal or nominal level data.

Measures of Association

Measures of association (such as Lambda and Gamma) examine the size or strength of the association between two variables in a sample (they are not focused on whether or not the null hypothesis can be rejected).

Typically, if the null hypothesis cannot be rejected (i.e., we assume there is no statistical association between the two variables), then we ignore the "strength" of the association found, since whatever it is, we have determined it is due to error.

Suppose we found that the strength/size of the association between two variables was large, BUT the test of statistical significance indicated that we cannot reject the null hypothesis of no association.

How would we interpret these results?
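The slides name lambda as one such measure but do not show its computation. As an illustration only, here is one standard formulation (Goodman and Kruskal's lambda, treating the row variable as the DV) applied to the abortion/job-security table from the earlier slide; the data layout and function name are my own.

```python
# Goodman and Kruskal's lambda for the job-security/abortion counts from
# the earlier slide. Lambda is a "proportional reduction in error" measure:
# how much better we predict the DV once we know the IV category.
observed = {
    "Can Find Job Easy":     {"Yes": 24, "No": 20},
    "Can Not Find Job Easy": {"Yes": 25, "No": 26},
}

def goodman_kruskal_lambda(table):
    n = sum(sum(col.values()) for col in table.values())
    # E1: prediction errors knowing only the DV marginals (guess the modal row)
    row_totals = {}
    for col in table.values():
        for row, count in col.items():
            row_totals[row] = row_totals.get(row, 0) + count
    e1 = n - max(row_totals.values())
    # E2: errors when we guess the modal row within each IV category
    e2 = sum(sum(col.values()) - max(col.values()) for col in table.values())
    return (e1 - e2) / e1

lam = goodman_kruskal_lambda(observed)
print(round(lam, 3))
```

For this table lambda works out to 1/46, about 0.022: a very weak association, consistent with the nearly identical column percentages.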


Answer: in most cases, if Chi Square is not significant then we must assume that the two variables are not associated (we cannot reject the null hypothesis of "no association"), even if the measure of association (e.g., lambda, gamma) is large.

If the two variables are not associated, then it doesn't matter how "strong" the relationship is in our sample, since the probability is sufficiently high that the two variables are not associated with one another in the population.

How does Chi Square help us determine the level of probability that the null hypothesis is true? That is, the probability that the association/relationship found between two variables in our sample is simply due to sampling error and not an association found in the population.


Answer: Chi Square compares the observed relationship, found in the sample, to a "table of no relationship."

That is, it creates a table displaying the two variables and calculates the numbers that would be in each cell of the table if there were no relationship, and then compares this table to the table of actual data found from the sample.

More Specifically: How does Chi Square Work?

First, it creates a table of "no relationship". This is done by first creating a table that shows the two variables and their marginal totals found from the sample (no numbers are placed in the cells of the table yet).

Next, it uses the marginal totals to determine what numbers will go in each cell of the table.

Once this table is created, it compares this "table of no relationship" to the table displaying the actual data found from the sample.

If the values in the two tables are similar, then there is a high probability that whatever relationship is seen in the sample is due to sampling error and not due to a real relationship in the population. The more similar the table of no relationship is to the actual-data table, the more likely that there is NO association between the two variables in the population. That is, whatever relationship is found in the sample, and subsequently shown in the actual-data table, is likely due to sampling error and not a true reflection of the population.

Table 11.1 below provides sample frequencies (taken from your book). What are the dependent and independent variables? What are the marginals?

In order to calculate a Chi Square for these data, what would you guess is the next step that should be taken?

Table 11.1 provides actual frequencies from a sample.

The next step is to calculate a table of "no association". That is, if this sample had been drawn and there was "no association" between the variables, what would the table look like?

Table 11.3 provides the table of "no association". That is, what one would expect to find if these two variables were not associated.


We need to calculate fe = Expected Frequency if No Association.

That is, the cell frequencies that would be expected in a bivariate table if the two variables were unrelated (statistically independent). For each cell in the table:

fe = (column marginal)(row marginal) / N

Once we have created our second table of "no association", how do we calculate Chi-Square?

X² = Σ (fo − fe)² / fe

Where:
fo = observed frequencies
fe = expected frequencies if no association

Chi Square compares the observed and the expected frequencies and from this comparison provides the probability that the null hypothesis of no association should be accepted. Typically, alpha is set at .05. If there is a 5% or less probability that the null hypothesis is true, then we reject the null hypothesis.

Table 11.5 calculates chi square for our example using the chi square formula.

How do we interpret the Chi Square Statistic?

That is, in our example, what does the number 57.99 mean?

Answer: we use a Chi Square distribution table to locate 57.99 and its associated probability (Appendix D in the text). Or, we observe computer results, such as output from SPSS.


Is Chi Square significant when it equals 57.99 with one degree of freedom (df)?

(see Chi Square distribution table)


An examination of the Chi Square distribution table, with a df of 1, shows us that the probability of obtaining a X² of 57.99, when the null hypothesis is true, is less than .001.

That is, if there is "no association" between the variables, then the chance of drawing a sample with the degree of association found in our sample is less than 1 in 1,000.

Therefore, we reject the null hypothesis of no difference and assume that the difference found in the sample exists in the whole population (we accept the research hypothesis).

Limitations of Chi Square:

1. Chi Square is sensitive to sample size. The larger the sample size, the larger the chi square. Consequently, the null hypothesis is more likely to be rejected with a large sample.

2. Chi Square is sensitive to small expected frequencies. Each cell should include at least 5 cases to be sure that chi square is accurate.

3. While Chi Square shows us statistical significance, it does not give us information about the strength of the relationship or substantive significance. (This is left for measures of association and interpretation of the data.)
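As a sketch of what the distribution-table lookup is doing: for df = 1 only, the chi-square tail probability has a closed form via the standard normal, so the slide's conclusion can be checked numerically. The helper name is my own; for other df you would use the printed table or a statistics library (e.g. scipy.stats.chi2.sf).

```python
import math

# For df = 1, P(X² > x) = erfc(sqrt(x / 2)). This shortcut is specific
# to one degree of freedom, because X² is then a squared standard normal.
def chi_square_p_value_df1(x):
    return math.erfc(math.sqrt(x / 2))

print(chi_square_p_value_df1(3.841))   # the familiar .05 critical value
print(chi_square_p_value_df1(57.99))   # far below .001: reject the null
```

Plugging in the .05 critical value (3.841) recovers a probability of about .05, and 57.99 gives a probability far below .001, matching the table lookup described above.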

So, given the limitations, it is useful to revisit our earlier question: Does it make sense to report (or even examine) the measures of association if the test of statistical significance tells us that we should not reject the hypothesis of "no association"?

Answer: typically, if the chi square is not significant, then the measure of association (e.g., lambda) should not be considered, since we must accept the null hypothesis of no difference.

However, because chi square is affected by the number of cases in the sample, if the sample is small, chi square is more likely to suggest no relationship between two variables (even if one exists).

Therefore, if one has a small sample it would be wise to examine the size of the measure of association (e.g., lambda) even if chi square is not significant.


(see you later)


To read the table we need to know the degrees of freedom.

With cross-tabulation data we find the degrees of freedom by using the following formula:

df = (r – 1) (c – 1) Where: r = the number of rows c = the number of columns

What are the degrees of freedom in our bivariate (2 X 2) table?
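The df formula above can be sketched directly; the function name is my own.

```python
# df = (r - 1)(c - 1), where r and c are the number of row and column
# categories in the bivariate table.
def crosstab_df(rows, cols):
    return (rows - 1) * (cols - 1)

print(crosstab_df(2, 2))  # -> 1, our 2 x 2 table
print(crosstab_df(3, 4))  # -> 6, e.g. a 3 x 4 table
```

So the 2 x 2 table in our example has one degree of freedom, which is the df row used in the distribution-table lookup above.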
