1 Standard Scores
Richard S. Balkin, Ph.D., LPC-S, NCC

2 Normal Distributions
While Best and Kahn (2003) indicated that the normal curve does not actually exist, measures of populations tend to approximate this distribution.
It is based on probability—the chance of certain events occurring.

3 Normal Curve
The curve is symmetrical: 50% of scores are above the mean, 50% below.
The mean, median, and mode have the same value.
Scores cluster around the center.

 "68-95-99" rule  One standard away from the mean in either direction (red) includes about 68% of the data values. Another out on both sides includes about 27% more of the data (green). The third standard deviation out adds another 4% of the data (blue). 5 The Normal Curve 6 Interpretations of the normal curve  Percentage of total space included between the mean and a given standard deviation (z distance from the mean)  Percentage of cases or n that fall between a given mean and standard deviation  Probability that an event will occur between the mean and a given standard deviation  Calculate the rank of scores in a normal distribution  Normalize a  Test the significance of observed measures in an

7 Interpretations of the normal curve
The normal curve gives us two important pieces of information:
We can view information related to where scores fall using the number line at the bottom of the curve.
I refer to this as the score world.
It can be expressed in raw scores or standard scores—scores expressed in standard deviation units.
We can view information related to probability, percentages, and placement under the normal curve.
I refer to this as the area world.

8 Interpretations of the normal curve

9 Does the normal curve really exist?
"…the normal distribution does not actually exist. It is not a fact of nature. Rather, it is a mathematical model—an idealization—that can be used to represent data collected in behavioral research" (Shavelson, 1996, p. 120).

10 Does the normal curve really exist?
Glass & Hopkins (1996):
"God loves the normal curve" (p. 80).


No set of empirical observations is ever perfectly described by the normal distribution…
but an independent measure taken repeatedly will eventually resemble a normal distribution
Many variables are definitely not normally distributed (e.g., SES)
"The normal curve has a smooth, altogether handsome countenance—a thing of beauty" (p. 83).

11 Nonnormal distributions
Positively skewed—the majority of the scores are near the lower numbers
Negatively skewed—the majority of the scores are near the higher numbers
Bimodal distributions have two modes

12 Positively skewed

If a test was very difficult and almost everyone in the class did very poorly on it, the resulting distribution would most likely be positively skewed.
In the case of a positively skewed distribution, the mode is smaller than the median, which is smaller than the mean.
The mode is the point on the x-axis corresponding to the highest point of the curve, that is, the score with the greatest frequency.
The median is the point on the x-axis that cuts the distribution in half, such that 50% of the area falls on each side.
The mean is pulled by the extreme scores on the right.

13 Negatively skewed
A negatively skewed distribution is asymmetrical and points in the negative direction, such as would result with a very easy test. On an easy test, almost all students would perform well and only a few would do poorly.
The order of the measures of central tendency would be the opposite of the positively skewed distribution, with the mean being smaller than the median, which is smaller than the mode.

14 Normal Curve Summary
Unimodal
Symmetry
Points of inflection
Tails that approach but never quite touch the horizontal axis as they deviate from the mean

15 Standard scores
Standard scores assume a normal distribution.
They provide a method of expressing any score in a distribution in terms of its distance from the mean in standard deviation units.
Z score
T score

16 Z-score
A raw score by itself is rather meaningless. What gives the score meaning is its deviation from the mean.
A Z score expresses this value in standard deviation units.

17 Z-score

18 Z score formula

19 Z score computation
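The formula and computation on slides 18-19 are not reproduced in this handout. In the same plain notation used for the T score formula below, the standard z score formula is

  z = (X - M) / SD

where X is the raw score, M is the mean, and SD is the standard deviation. As an illustration (these numbers are assumed, not taken from the slides), a raw score of 65 on a test with M = 50 and SD = 10 gives z = (65 - 50) / 10 = 1.5.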

20 T score
Another version of a standard score
Converts from Z score
Avoids the use of negative numbers and decimals

21 T score formula
T = 50 + 10z
Always rounded to the nearest whole number
A z score of 1.27 =

22 T score formula
T = 50 + 10z
Always rounded to the nearest whole number
A z score of 1.27 =
T = 50 + 10(1.27) =

23 T score formula
T = 50 + 10z
Always rounded to the nearest whole number
A z score of 1.27 =
T = 50 + 10(1.27) = 50 + 12.70 = 62.70, rounded to 63

24 More on standard scores
Any standard score can be converted to standard deviation units
A test with a mean of 500 and a standard deviation of 100 would be…

25 More on standard scores
Any standard score can be converted to standard deviation units
A test with a mean of 500 and a standard deviation of 100 would be…
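The completed conversion is not shown in this handout, so as an illustration (the raw score of 600 is assumed, not from the slides): a score of 600 on that test is (600 - 500) / 100 = 1 standard deviation above the mean, that is, z = 1.0, which is equivalent to T = 50 + 10(1.0) = 60.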

26 Calculating the percentage of the distribution that falls before, between, or beyond the mean and a given standard deviation
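A minimal sketch of this kind of calculation, using the cumulative distribution function from scipy.stats (the use of scipy here is my illustration; a normal-curve table gives the same values):

  from scipy.stats import norm

  # proportion of the distribution below z = 1 ("before")
  print(norm.cdf(1.0))                  # about .8413

  # proportion between the mean (z = 0) and z = 1 ("between")
  print(norm.cdf(1.0) - norm.cdf(0.0))  # about .3413

  # proportion beyond z = 1 ("beyond")
  print(1 - norm.cdf(1.0))              # about .1587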

27 Confidence Intervals
Confidence intervals provide a range of values given error in a score
For example, if the mean = 20, then we can have
68% confidence that the score will be plus or minus 1 sd from the mean
95% confidence that the score will be plus or minus 2 sd from the mean
99% confidence that the score will be plus or minus 3 sd from the mean

28 Confidence intervals
A mean of 20 with a sd of 5


68% confidence that the score will be between 15 and 25
95% confidence that the score will be between 10 and 30
99% confidence that the score will be between 5 and 35

29 Confidence Intervals
A mean of 48 and a sd of 2.75
68% confidence that the score will be between ____ and ____
95% confidence that the score will be between ____ and ____
99% confidence that the score will be between ____ and ____

30 Confidence Intervals
A mean of 48 and a sd of 2.75
68% confidence that the score will be between 45.25 and 50.75
95% confidence that the score will be between ____ and ____
99% confidence that the score will be between ____ and ____

31 Confidence Intervals
A mean of 48 and a sd of 2.75
68% confidence that the score will be between 45.25 and 50.75
95% confidence that the score will be between 42.5 and 53.5
99% confidence that the score will be between ____ and ____

32 Confidence Intervals
A mean of 48 and a sd of 2.75
68% confidence that the score will be between 45.25 and 50.75
95% confidence that the score will be between 42.5 and 53.5
99% confidence that the score will be between 39.75 and 56.25
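A minimal sketch of the computation behind slides 28-32, written as a small Python function (the function name is mine, not from the slides):

  def ci_bounds(mean, sd, k):
      # interval of plus or minus k standard deviations around the mean
      return mean - k * sd, mean + k * sd

  print(ci_bounds(48, 2.75, 1))  # (45.25, 50.75), about 68%
  print(ci_bounds(48, 2.75, 2))  # (42.5, 53.5), about 95%
  print(ci_bounds(48, 2.75, 3))  # (39.75, 56.25), about 99%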

33 Correlation
Relationship between two or more paired variables or two or more data sets
Correlation = r or ρ (rho)
Correlations range from -1.00 (perfect negative correlation) to +1.00 (perfect positive correlation)
A perfect correlation indicates that for every unit of increase (or decrease) in one variable there is a corresponding increase (or decrease) in the other variable


34 Positive correlation

35 Negative correlation

36 Low correlation

37 Types of correlations
Pearson's Product-Moment Coefficient of correlation
Known as Pearson's r
Most commonly used
Spearman Rank order coefficient of correlation
Known as Spearman rho (ρ)
Only utilized with ordinal values
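Neither coefficient's computation is shown on these slides; a minimal sketch using scipy.stats (the data values are invented for illustration):

  from scipy.stats import pearsonr, spearmanr

  x = [2, 4, 5, 7, 9]
  y = [1, 3, 6, 8, 10]

  r, p_value = pearsonr(x, y)      # Pearson's r, for interval/ratio data
  rho, p_value2 = spearmanr(x, y)  # Spearman rho, based on ranks (ordinal data)
  print(r, rho)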

38 Interpreting a correlation coefficient
Be aware of outliers—scores that differ markedly from the rest of the data
Look at direction (positive or negative) and magnitude (actual number)
Does not imply cause and effect
See the table on p. 388 for interpreting a correlation coefficient

39 Interpreting a correlation coefficient
Using the table on p. 388, interpret the following correlation coefficients:

+.52

-.78

+.12

40 Interpreting a correlation coefficient
Using the table on p. 388, interpret the following correlation coefficients:
+.52  Moderate
-.78  Substantial
+.12  Negligible

41 Computing a Pearson r

42 Computing Pearson r

43 Computing Pearson r


44 Computing Pearson r

45 Computing Pearson r
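The formulas and worked computation on slides 41-45 are not reproduced in this handout. One standard form, written in the plain notation used elsewhere in these notes, defines Pearson's r as the mean cross-product of z scores:

  r = Σ(zX × zY) / N

where each paired raw score on X and Y is first converted to a z score and N is the number of pairs.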

46 Correlational Designs
We use correlations to explore relationships between two variables
We can also use a correlation to predict an outcome—
In statistics this is known as a regression

47 Correlational Designs
For example, a correlation of r = .60 indicates that for every standard deviation unit increase in X there is a .60 standard deviation unit increase in Y
Correlational designs are different from experimental designs
In a correlational design, we explore relationships between two or more variables that are interval or ratio
In a correlational design we do not compare groups; we do not have random assignment (though we do have random selection)
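To make the .60 example concrete (the arithmetic here is added for illustration): in standardized form the prediction is zY' = r × zX, so with r = .60 a person who scores 1 standard deviation above the mean on X is predicted to score .60 × 1 = .60 standard deviations above the mean on Y.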

48 Correlational Designs
For example, maybe we want to know the relationship between self-esteem and depression. We could use two instruments, one that measures self-esteem and one that measures depression. Then we can conduct a regression analysis and see if the scores on one instrument predict scores on the other.

I would hypothesize that high scores in depression correlate with low scores in self-esteem.
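A minimal sketch of such an analysis in Python (the data values and variable names are hypothetical, not from the slides):

  from scipy.stats import linregress

  # hypothetical scores from a depression inventory and a self-esteem scale
  depression = [30, 25, 40, 22, 35, 28, 45, 20]
  self_esteem = [18, 22, 12, 25, 15, 20, 10, 27]

  result = linregress(depression, self_esteem)
  print(result.slope, result.intercept, result.rvalue)

linregress returns the regression slope and intercept along with the correlation (rvalue); a negative rvalue here would be consistent with the hypothesis that higher depression scores go with lower self-esteem scores.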
