What is Psychology?

Marshall High School Psychology Mr. Cline Unit One AD * Psychological Scientific Measurement

• We are all capable of speculating about other people's behavior. We do it every day.

• That man who took my parking space is clearly arrogant and inconsiderate; that little girl screaming in the grocery store clearly has bad parents.

• But as normal as it is for us to make these kinds of assumptions and explanations, there is nothing reliable or scientific about any of it.

• Our spotty observations of little girls in grocery stores can't lead to any larger conclusion about the relationship between parenting and behavior.

• Psychology attempts to apply the scientific method to the study of human thought and behavior in order to reach conclusions that do have real explanatory or predictive power.

• Psychologists come up with a hypothesis about people--that overly indulgent parenting leads to temper tantrums, for example--and then try to design a study that will show it to be right or wrong.

• There are many different kinds of studies, but all face the common problem of separating out the behavior you want to study from all the other kinds of behaviors that might happen along with it and the potential cause from other things that could cause it.

• Let's say you want to design a study to see whether parents who give in to their child's every wish are likely to have children who throw tantrums.

• You might ask parents to fill out a questionnaire to determine their parenting style; you then ask those who were especially indulgent and permissive to bring their child in to participate in a test of their ability to solve math problems.

• You tell them they'll get a candy bar as a reward for doing well. You don't really care about the children's math ability--that's just the pretense for getting them into your study. What you really care about is their behavior during the test.

• When they've finished, you look in your desk and then, in mock surprise, say 'well it looks like we're out of candy! Let me go ask a colleague if he has any.'

• You leave the room for about five minutes, come back still empty-handed, and apologize for not having any candy.

• Some of the children seem disappointed but okay; many others start to cry and demand candy.

• Once you've taken note of the reaction, you have a friend run in breathlessly with 'the last candy bar' and the children leave your study munching happily.

• You take a look at the numbers; 80% of the children exhibited some kind of 'tantrum' behavior when they found out they really weren't getting candy.

• You conclude that, just as your hypothesis stated, indulgent parents are associated with bratty children.

• But if you stopped here, you would have ignored one of the most important ideas in scientific testing: the scientific control.

• Your study design does seem to distinguish between different potential behavioral reactions to a common situation: some kids cry, and others don't.

• But your study doesn't distinguish between potential causes; all of these children have indulgent parents, but you can't be sure that it's indulgent parenting and not some other factor that causes 80% of them to cry.

• To help determine this, you'd have to bring in an equal number of children selected randomly and submit them to the same test; this is your control group.

• You find that about 80% of these kids cry as well when they're denied candy.

• This means that your original conclusion, that indulgent parenting is associated with bad behavior, is actually not supported by the results; all you can really say is that being denied promised candy makes a significant portion of all children cry.

• There are two possible explanations for this: one is that parenting style has nothing to do with behavior.

• The other, perhaps more likely explanation is simply that your study upsets children too much to learn anything valuable from it.

• Think of some tests you've taken--some are really easy, like a math test in which the only question is 1+1, and everyone who takes it gets 100%.

• Getting 100% on the test doesn't have very much to do with your preparation or your intelligence--it has more to do with the question being really easy.

• But if you're given a harder math question, like finding the integral of 2n, your score means more; if only a few people are getting 100%, this probably means that they have prepared better than everyone else.

• Promising kids candy and then refusing to give it to them is like the really easy test--most of them cry, but that's probably just because it's a really upsetting situation.

• If you designed a milder scenario that didn't cause as many tantrums, it might be able to tell you more about the differences between children who still did have tantrums and those who didn't.

• This parenting style/children's behavior/candy bar test is made-up, but it should give you some idea of the questions that psychologists have to ask themselves when designing studies and interpreting the results.

• The principles of the scientific method are very important for psychologists, who come up with hypotheses about behavior and then try to prove them right or wrong.

• Scientific control is crucial to getting results that sufficiently pull apart different potential behaviors and causes.

• Validity and Reliability

• Designing a psychological study isn't really that hard.

• If you've ever written a survey or taken a poll among your friends, you've conducted some crude psychological research.

• But designing a study that produces valuable and scientific results is really challenging.

• If you gave your friends a survey about their political leanings, they might be influenced by the way you phrased the questions or by knowing your own political opinions; your survey might not accurately measure what you think it does.

• Two key concepts for designing scientific psychological studies are reliability and validity.

• We'll look at some examples of both to better understand the importance of careful research design.

• Do you have a friend or family member who will always help you out if you ask?

• You'd probably describe this friend as reliable.

• Reliability in psychological research isn't really that different - it means that your tools for measuring a given variable measure it accurately and consistently.

• If you use a rigid ruler to measure the length of your foot, you should always get the same length; this is a measurement that has test-retest reliability.

• But if instead you measure your foot by holding your fingers about an inch apart and then moving them down the length of your foot and counting as you go, you'd probably come up with different measurements each time.

• Your fingers are not a particularly reliable measurement tool for the length of your foot.

• Important to designing and interpreting psychological research is the idea of validity.

• In general, validity refers to the legitimacy of the research and its conclusions: has the researcher actually produced results that support or refute the hypothesis?

• It can be easier to understand validity by looking at some of the ways that research can fail to be valid.

• A study fails construct validity if what it chooses to measure doesn't actually correspond to the question it's asking.

• Let's say you were doing research that required you to know how intelligent your subjects were.

• To measure intelligence, you decide to administer a really difficult physics exam.

• If you did this, your study would lack construct validity because a score on a physics exam doesn't really measure intelligence; it just measures whether you've taken physics or not.

• Internal validity has to do with confirming that a causal relationship you've found between your variables is actually real.

• Even if you think you've found a definite relationship between changing one variable and observing change in another, you could be inadvertently changing something else that is actually causing the effect.

• As an example, let's say you wanted to test whether certain colors of fonts help people remember information better than others.

• You give your subjects two texts, one in green and one in red.

• The red text is about celebrity gossip; the green text is about chemistry.

• You find that your subjects remember the red text much better and conclude that red font helps memory.

• But by having two different texts, one much more easily memorable than the other, you introduced a confound into your experiment.

• You don't know whether your effect is caused by the red font or by the more interesting content.

• A third type of research validity is external validity, which has to do with your conclusions applying to more people than just the ones you tested.

• Though psychologists might like to test everyone, doing so would be absurdly expensive and time-consuming.

• So instead, psychologists take a sample of the population they want to study.

• This sample group is usually selected at random, based on who volunteers to participate.

• Psychologists try to get bigger groups to control for random variations.

• Usually the results found in the sample are assumed to generalize, unless there are compelling reasons to question it: for example, if a study on attitudes toward aging had only college students for subjects, it might fail external validity.

• In general, reliability and validity are principles related to making sure that your study is actually testing what you think it is.

• Reliability makes sure that your test measures its variables accurately.

• Validity ensures that your measures and variables are telling you what you think they should: that your questions assess the right variables, that your experimental results have no confounds, and that your results generalize as you think they will.

• Once psychologists have carefully chosen a study design appropriate for their subjects, thought carefully about their variables and measurements, selected a sample group and run their tests, they're typically faced with a mountain of data.

• It could be anything from survey results to maps of brain activity.

• In order to make the experimental process worthwhile, psychologists must now find ways to interpret and draw conclusions from their data.

• They ultimately want to test whether the data supports or rejects their hypothesis.

• In order to do this, psychologists use statistical analysis.

• They make use of two main types of statistics: descriptive and inferential.

• Descriptive statistics help psychologists get a better understanding of the general trends in their data, while inferential statistics help them draw conclusions about how their variables relate to one another.

• Descriptive statistics are basically what they sound like: they describe and summarize a set of data.

• Descriptive statistics could be things like the average age of participants or how many were men and women.

• Your GPA is a descriptive statistic; it summarizes how you've done in school.

• These kinds of statistics generally make use of averages, also known as measures of central tendency, to summarize the data set.

• There are three kinds of averages that you may have learned about in math class: the mean, the median, and the mode.

• The mean is what's most commonly associated with average; it's when you add up a set of numbers and then divide by how many are in the set.

• Let's say you did a survey of how many donuts per week your neighbors eat.

• Only five of your neighbors respond, giving you a data set that looks like this: {1, 2, 2, 2, 13}.

• The mean number of donuts your neighbors eat is (1+2+2+2+13)/5, or four.

• But since one of your neighbors is an outlier and eats way more donuts per week than the others combined, the median or mode might be a better measure of central tendency for this data set.

• The median of a set is just the number that divides the set in half if you've ordered it from least to greatest - so in this case, two, or the number in the middle.

• The mode is the most frequently repeated number in the set - in this case, also two.

• You can remember mode by just replacing the last two letters--mode is 'most.'

• Though the mean is often a great tool for measuring central tendency, in this case two donuts per week is much more realistic than four.
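The three averages from the donut survey can be checked with a short script. This is just an illustrative sketch using Python's standard statistics module, not part of the original lesson.

```python
import statistics

# Donut survey responses from the five neighbors
donuts = [1, 2, 2, 2, 13]

mean = statistics.mean(donuts)      # (1 + 2 + 2 + 2 + 13) / 5 = 4
median = statistics.median(donuts)  # middle value of the sorted set: 2
mode = statistics.mode(donuts)      # most frequently repeated value: 2

print(mean, median, mode)
```

Notice how the single outlier (13) pulls the mean up to four, while the median and mode both stay at the more typical two.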

• Descriptive statistics also address the dispersion of a set, or how widely its elements vary.

• The variance and standard deviation are related measures that can give psychologists a sense of this for their data.

• Both tell psychologists how far from the mean an individual data point is likely to be.

• So while the data set of your donut survey has an outlier (13), the equations to calculate standard deviation and variance average the squared deviations of every response, so how common each result is gets taken into account.

• Since 13 appears only once while two appears three times, the more typical responses balance out the outlier's influence on how widely responses are likely to vary.

• Inferential statistics can be used to draw conclusions from the data that descriptive statistics describe.
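To see dispersion in the donut data concretely, here is a sketch (again not from the lesson itself) that computes the population variance and standard deviation with Python's statistics module.

```python
import statistics

donuts = [1, 2, 2, 2, 13]

# Population variance: the average of squared deviations from the mean (4)
# ((-3)**2 + 3 * (-2)**2 + 9**2) / 5 = 102 / 5 = 20.4
variance = statistics.pvariance(donuts)

# Standard deviation: the square root of the variance, about 4.52
std_dev = statistics.pstdev(donuts)

print(variance, std_dev)
```

The large standard deviation relative to the typical response of two donuts is itself a hint that an outlier is stretching the data.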

• Researchers can look at their data and determine how likely it is that changes in one variable caused changes in another or that two variables seem to be related to one another.

• These conclusions can help them determine whether the data supports or rejects their hypothesis.

• Let's say you conducted a few other surveys of your neighbors, attempting to relate donut consumption to weight.

• You get results back that seem to confirm your hypothesis that higher donut consumption is associated with higher weight; the 13-donut-per-week neighbor is the heaviest of the bunch.

• But before you condemn donuts, you need to show that your results have statistical significance.

• When psychologists look at data, they perform a variety of statistical tests to confirm that their correlations aren't just a result of chance.

• Psychologists have agreed that if a result has less than a five percent probability of occurring by chance, it can be called statistically significant.

• If results are significant, they can be used to support or reject hypotheses.
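One simple way to illustrate the five percent cutoff is a permutation test: shuffle the data many times and count how often a difference as big as the observed one appears by chance. Everything below (the made-up scores for two groups and the test itself) is a hypothetical sketch, not a test from the lesson.

```python
import random

# Hypothetical scores for two small groups (illustrative data only)
group_a = [12, 15, 14, 16, 15]
group_b = [9, 8, 11, 10, 9]

observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Permutation test: repeatedly shuffle the pooled scores into two new
# groups and count how often the difference in means is at least as
# large as the one actually observed.
pooled = group_a + group_b
random.seed(0)  # fixed seed so the sketch is reproducible
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(p_value, "significant" if p_value < 0.05 else "not significant")
```

A p-value below 0.05 means a difference this large would arise by chance in fewer than five percent of shuffles, which is exactly the threshold the bullet above describes.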

• Statistical analysis is a complicated and important part of psychological research, and this lesson has only provided a brief introduction!