
HANDOUT 2 RESEARCH DESIGNS

The Whole Picture Principle: Research scientists are so caught up in their own narrow endeavors that they have difficulty seeing the whole picture of anything, including their own research.

BASIC CONCEPTS

Research

According to Kerlinger, the purpose of research is theory. Research that is not theory-based is simply journalism - a reporting of facts with no connecting thread. Research serves three purposes:

1 - theory building: to generate a theory where none currently exists

2 - theory confirmation: to test propositions derived from our theory, and thus test the soundness of our theoretical conceptualizations

3 - theory modification: to seek out the boundaries of our theory; to identify the areas where the theory does not adequately explain the phenomenon we are interested in addressing

Good research extends our knowledge into new areas, and makes known information that was previously unknown. More importantly, research helps us identify the fundamental relationships between events in our universe. Research depends on the four ways of knowing.

Dogma is an appeal to authority. In using this approach we ask the people who are authorities in the area to explain the nature of the relationships. The benefit of dogma is that we don't have to re-learn what someone else has discovered - we can continue from where they stopped. The problem with dogma is that we can never know more than the authorities know.

Reason is the use of logic and rational thought to determine what is and what isn't. Science proceeds from the proposition that while reality may transcend logic, it does not defy logic. The problem with reason is that two things may both be logically possible, but mutually exclusive - in other words, if one explanation is correct, the other one cannot be. While reason tells us how our world CAN go together, it does not always tell us how our world DOES go together.

Phenomenology is the use of subjective experience in determining how the world is put together. However, when two people have different subjective experiences, phenomenology cannot tell us who has the "right" or "correct" experience. Phenomenology is helpful in the creative process, and without our subjective (or intuitive) processes, creativity (even in science) is not possible. However, it is not useful in resolving disputes about the nature of the world.


Empiricism is the use of objective measurement and observation to determine reality. Within the empirical method, only that which can be observed and agreed to by all is considered "real" and used to validate or build theories.

In reality, all four methods are used to build, test, and modify the theories we use in science.

Theory

A theory is a set of interrelated propositions about the relationships between concepts. In this definition of theory, we have a number of important issues:

Proposition - A proposition is a principle connecting two or more concepts by stating the mechanism that holds them together. For example, low self-esteem (concept 1) causes depression (concept 2) by increasing the negative thoughts (concept 3) one has about oneself (the mechanism connecting concepts 1, 2 and 3). We can have two different types of propositions: specific and general.

Specific: A specific proposition is very limited, and generally involves only 2 or 3 concepts.

General: A general proposition is more global and often combines specific propositions into more wide ranging generalizations about a phenomenon.

Relationship - A relationship is the manner in which two concepts go together. There are only three types of relationships: positive (increasing A also increases B), negative (increasing A decreases B and vice versa), and ambivalent (increasing A increases B only under certain conditions; under other conditions, increasing A decreases B).
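
If you want to see what a relationship looks like in actual data, one way (sketched here in SPSS syntax; the variable names SELFEST and DEPTOTAL are only illustrations, not variables defined in this handout) is to ask for the correlation between the measures of the two concepts:

CORRELATIONS /VARIABLES=SELFEST DEPTOTAL.

A positive coefficient indicates a positive relationship; a negative coefficient indicates a negative relationship. Note that a single coefficient cannot detect an ambivalent relationship - for that, you would have to examine the correlation separately under each condition.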

Concepts - A concept is a single "thing," specific entity or mental image. For example, "intelligence" is a concept in that it is a single entity in a theory. It is important to note that concepts, in and of themselves, have no truth value. That is, you cannot debate the validity of a concept. Thus, a concept such as "unconscious" may or may not be useful in describing our topic of study, but it is not "valid" in and of itself.

Construct - Once you define a concept, you have a construct. When you define a concept, you can now begin to rationally debate and argue the "thing" about which you are talking. A construct, like a concept, has no objective truth value. In other words, the construct may be a useful tool to use in describing reality, but it is not reality in and of itself.

Operational Definition - With an operational definition, we now get into objective, empirical truth. An operational definition tells us the operations we will perform in order to observe the construct we have defined. In other words, we need to decide how we are going to observe the concept. The validity of our observations depends on the operational definition we choose.
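
For example (a sketch only - the ten items here are the depression items used later in this handout's questionnaire exercise), the construct "depression" might be operationally defined as the total score on a ten-item self-report scale. In SPSS syntax, that operational definition becomes a single computed score:

COMPUTE DEPTOTAL = SUM(ITEM01 TO ITEM10).

Anyone who follows this definition will observe "depression" in the same way, which is exactly what an operational definition is meant to guarantee.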

So far, then, the pieces build on one another: concept, then construct, then operational definition.

Philosophical Issues

Science is interested in the development (rational) and testing (empirical) of theories. At issue is the search for cause and effect relationships. A basic premise of psychology is that behavior is caused by something - it doesn't just arise spontaneously. Determinism is a guiding principle of research: everything is determined by something.

Aristotle's four causes:

Material: What a thing is made out of; the material that goes into it
Formal: The form and shape a thing takes; the "pattern" followed in making that thing
Motive: The impetus that moved things to create the thing
Final: The end result toward which the thing moves

Necessary & Sufficient Causes


Necessary: A cause that must exist in order for the thing to come into being
Sufficient: A cause that, by itself, will cause a thing to come into being

Determining causal connections is often more a logical, rational endeavor than it is an empirical one. In connecting events to one another in a causal chain of events, it is important to remember a few basic principles. These principles are outlined below.

Temporal nature of cause: After cannot cause before. Causes occur before effects, and not after them.

"Sticky" versus "loose" variables: "Sticky" variables are those that rarely change, are hard to change, and generally have little variation over the life of an individual. "Loose" variables are those that often change, are easy to change, and have a large variability over the life of an individual. In general, sticky variables are the causes, and loose variables tend to be the effects. The more "sticky" a variable, the less likely it is to be affected by something else; the more "loose" a variable, the more likely it is to be caused by something else.

Truth: In science, Truth is conceptualized as the way in which the world is put together. However, truth is not a unitary thing but is composed of different types: necessary truths, contingent truths, and impossibilities. Truth can be seen as existing on a continuum from the necessary to the impossible, as below.

necessary (true) ----------- contingent (possible) ----------- impossible (false)

Necessary Truth: Something which, by definition, is true. It makes no sense to check out a necessary truth as it cannot possibly be false.

Impossibility: Something which, by definition, cannot be true. For example, the question "Are bachelors married?" is silly to ask since the answer is apparent in the definition of the term "bachelor."

Contingent: Something which can be either true or false. Only observation will determine the truth value of a contingent proposition. Contingent truths are the proper object of research in psychology.

MEASUREMENT ISSUES

Measurement is the hallmark of a science. An axiom in science is that "if something exists, it can be measured." In developing a research design, one always comes to the question of how the concepts shall be measured. Measurement should always follow your construct definition and be guided by that definition. The selection of an appropriate measurement device is one of the hardest things in research. In psychology, the questionnaire is the most common form of measurement.

Questionnaires can include the following types of measures or reporting methods:

- rating scales
- self-report inventories
- psychological tests

The thing all of these have in common is that we are asking the subject (or a significant other in the case of a rating scale) to report on his or her own behavior, feelings, beliefs, attitudes, etc. There are a number of problems with the questionnaire, such as the honesty of the person filling it out, and thus an alternative to the questionnaire is the direct observation of behavior by the researcher. Regardless of the type of measurement/observation being made, the basic steps in measurement selection will apply.

Step 1: Decide WHAT To Measure

This step may sound redundant, yet many research projects have failed miserably because the researchers failed to adequately consider what they needed to measure. Before we can put together our questionnaire packet or begin to collect data, we need to have some idea of what it is we are looking for. By looking at all of the implications of what we are measuring, both theoretical and practical, we can avoid major pitfalls later on. Another way of looking at this is to ask ourselves what the basic concepts are.

There are two major ways of determining what to measure:

1. Theory. We can use an established theoretical perspective to tell us what is important to measure and what is not.

2. Empirical. We can look to previous research to tell us what other people have found to be important concepts to measure. We can also use our own experience (based on objective observation, of course) to guide us.

Both of these methods have their proponents, and each one is useful in its own way. Some researchers advocate combining the two methods and using theoretical concepts which have been previously justified.

Step 2: Decide HOW to Measure

Once you have decided what constructs you want to measure in your research, you can get to the next step and decide how to do it. Here the choices become complex and there is no right or wrong way to do things.


Choice 1 - WHO will report the information?

self-report : the subject reports on him/herself
ratings : someone who knows the subject reports
observation : the researcher observes and reports

Choice 2 - TYPE of information reported

performance : how well the subject does on a task
personality : person's general response to stimuli
interests : person's expressed desires
attitudes : person's attitude toward stimuli

Choice 3 - FORMAT of observed/reported information

forced choice
Likert scales
free choice
checklist
open-ended
behavioral counting
Q-sorts

Step 3: Selecting Tests

A good place to go to select tests is the Mental Measurements Yearbook (which is published every 4 to 5 years). This "yearbook" lists published tests along with references and reviews so you can evaluate the appropriateness of a test.

The two aspects of a test which are extremely important are the reliability and the validity of the test. Reliability is concerned with the consistency of the test - the higher the reliability, the more consistently the test measures whatever it measures. Validity is the degree to which the test measures what it is supposed to measure. In doing research, you will want to select a test which most closely matches the construct you have developed and which appears to measure that construct fairly closely.
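
If your version of SPSS includes the RELIABILITY procedure, you can estimate one common form of reliability (internal consistency, Cronbach's alpha) for a set of items. A minimal sketch, assuming the ten depression items from the questionnaire used later in this handout:

RELIABILITY
  /VARIABLES=ITEM01 TO ITEM10
  /SCALE('DEPRESSION') ALL
  /MODEL=ALPHA.

Validity, by contrast, cannot be read off a single coefficient; it is judged by how well the test's scores behave the way your construct says they should (for example, by correlating the test with other measures of the same construct).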

Step 4: Putting it Together

Most researchers put their measures together into a "packet" to be given to subjects. These "packets" generally contain a face sheet which explains the research and asks for informed consent, followed by a demographic data sheet asking for age, gender, length of hair, etc. Finally, the tests or measures being used come last. It is suggested that you "pre-test" your packet on a small group of people and get their reactions to the questionnaires. You can also get an idea of how long the questionnaire will take people to answer, problems with the wording of the questions, and so on. It is easier to modify your study before you begin collecting data than after you have collected half (or more) of the information needed.

THE 7 STEPS OF RESEARCH

Step 1 - Select Topic: Choose a specific area of psychology that interests you - the more specific the area the better.


When you have selected an area, get a book on that topic - preferably a new edition (no more than 4 to 5 years old) and preferably an edited volume.

As you read through this book, keep notes on the important issues in this topic.

- what are the major concepts
- how are these concepts defined (what are the constructs used?)
- what are the cause-effect relationships between the concepts (what are the basic propositions?)

Step 2 - Develop Questions: Research questions deal with how concepts are related to one another. Focus on the contingent truths that come out of the previous research in the area. As you ask questions, you should be narrowing your topic down even further. Your research should be focused on one or two questions only as it is impossible to answer more than that in a research project.

Engineering Questions: These questions concern how to do something. Examples include: How can integration be achieved? How can we improve the conditions of the poor? As you can see, engineering questions deal with "how to" types of issues.

Value Questions: These questions involve which of two things is better or worse and involve terms such as "good," "bad," "better," "best" or "desirable." Examples of value questions are: "Should we integrate the schools?" "Is psychology of benefit to society?"

Science cannot answer engineering or value questions. Though these questions are very important and deserve an answer, they cannot be empirically tested because human judgment is involved. Instead, science focuses on testable propositions that contain variables that can be measured or manipulated. As a result, these propositions can be shown to be correct or incorrect.

Step 3 - Formulate Hypotheses: An hypothesis is a tentative answer to a research question. It is a conjectural statement of a relationship between two or more variables, and is a declarative statement. An hypothesis should be stated in a causal manner: A causes B (simple), or A and B cause C in the presence of D but cause E in the presence of F (complex).

Step 4 - Design the Study: In this step, you need to decide what information you need to test your hypotheses, who you need to obtain that information from, and how you will obtain the information. There are four parts to any research design:

1 - Subjects: Who will the subjects be? How will they be recruited? What characteristics must they have? How many will you need?

2 - Independent Variable(s): What are the independent variables? How will they be observed and/or manipulated?

3 - Dependent Variable(s): What are the dependent variables? How will they be observed? How are they affected by the independent variables?

4 - Procedures: How will you conduct the data collection?

Step 5 - Collect Data: Data collection depends upon the type of design being used. In quantitative research, one collects scores and turns one's observations into a series of measurements. In qualitative research, one records observations without regard to amount or measurement.

Step 6 - Analyze Data: This is the most complex step and involves organizing one's data into meaningful units. If not organized, the data could easily overwhelm one and produce no meaningful conclusions. In qualitative research, one organizes data according to themes. In quantitative research, data organization is according to scores.

Step 7 - Interpret Data: Discuss what you discovered with your research, and how it affects the theory. Good research will generate questions which can then be answered with future research (keeping researchers in business for some time to come).

RESEARCH DESIGN

Research Validity

Research validity is concerned with whether or not the research conclusions are valid. The question at the center of research validity is: Does the research show what you say it shows? There are two types of research validity: internal and external.

Internal Validity: Internal validity refers to causal relationships. That is, can the researcher infer a cause and effect relationship in his/her study? To be exact, internal validity is the extent to which rival hypotheses can be ruled out. This is "internal" to the research and is best controlled by the type of design used.

Threats to internal validity are those things which can affect your ability to infer causal relationships in your study. The classic threats to internal validity are:

1 - History : This refers to changes in the subjects which are brought about by something other than the treatment - something in the personal history of the subjects. This is an external threat, as something outside the subjects happens to change them.

2 - Maturation : These are changes within the subject that cause change in the subject's behavior or performance. This is an internal event, as something changes inside the subject.

3 - Testing : Repeated exposure to a test can affect the scores one obtains on that test. The testing effect is therefore the result of familiarity with a test.

4 - Instrumentation : This is systematic error in the measuring instrument. For example, if a treatment group is measured with a device that consistently over-estimates their performance, then the improvement is due to the instrument and not the treatment.

5 - Statistical Regression : Extreme scores tend to move toward the mean over time regardless of treatment (or lack thereof).

6 - Mortality : If a particular type of therapy has many drop-outs, is the effect of this therapy due to the treatment, or to the fact that those who were not getting better left?

7 - Selection : The research results are biased because subjects may have agreed to participate in the research because of something which influences the dependent variable.

External Validity: External validity is the extent to which the results can be generalized to other people, places, or times. This is best controlled by subject selection, instructions, and observational methods.

The classic threats to external validity are listed below. These are things which can prevent you from generalizing information generated from your research.

1 - Accessible versus target population : Is the population we want to talk about the same as the one we obtained our sample from?

2 - Interaction of treatment and subject : Is our treatment having an effect, or is the effect confounded by subject characteristics?

3 - Description of the variables measured : What one person calls aggression another person may label assertiveness.

4 - Interaction of history and treatment : Social and historical events occurring at the time of the research may interact with the treatment to affect the results.

5 - Interaction of time of measurement and treatment : The time at which a variable is measured may affect the strength of that variable.

6 - Pretest and posttest sensitization : The pretest or posttest may tip off the subject as to what the research is all about, thus encouraging him/her to react in a desired manner.

7 - Hawthorne effect : Observing behavior changes behavior.

8 - Rosenthal effect : Experimenter bias - the researcher finds what he/she wants to find.

9 - Novelty effect : When a person is placed in a novel situation, he/she reacts differently than normal.

Classic Research Designs

The following are the "classic" research designs. It would behoove you to memorize these for the licensing exam. For this course, it is important only to be familiar with the basic types of designs.

True experimental designs: These designs offer the best control over threats to internal validity. True experimental designs have three characteristics: direct manipulation of the independent variable; a control group which does not receive the treatment; and random assignment of subjects to groups.


1 - Pretest-posttest control group design : Comparison of the control group to the treatment group before and after treatment.

2 - Posttest-only control group design : Comparison of the treatment group to a control group after treatment only.

3 - Solomon four-group design : Combination of 1 and 2, above. Controls for the effects of pretest sensitization and testing.
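
As an illustration of how data from one of these designs might be analyzed (a sketch only - the grouping variable GROUP, coded 1 = treatment and 2 = control, is hypothetical and not part of this handout's questionnaire), a posttest-only control group design is often analyzed with an independent-samples t test on the dependent variable:

T-TEST GROUPS=GROUP(1 2) /VARIABLES=DEPTOTAL.

Random assignment is what allows any posttest difference between the two groups to be attributed to the treatment rather than to pre-existing differences.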

Quasi-experimental designs: With quasi-experimental designs, individuals are not randomly assigned to groups. Instead, intact groups are used as they exist, and the treatment is or is not given to selected groups of subjects. Experimental control is limited due to this use of intact groups.

1 - Nonequivalent control group design : Two intact groups are compared after a treatment that only one of the groups receives.

2 - Separate sample pretest-posttest design : From a single group, randomly select those to be measured before treatment and those to be measured after treatment.

3 - Single group time series design : A group of subjects is measured repeatedly before and after a treatment is given.

4 - Multiple group time series design : More than one group is measured repeatedly before and after treatment; only some of the groups receive the treatment.

Pseudo-experimental designs: These designs do not have any controls for threats to internal validity. Though the independent variable is manipulated, making these designs experimental, there are no controls nor is there random assignment. The basic types of these designs are:

1 - One-shot : Observation of a single individual or group after receiving a particular treatment.

2 - One group pretest-posttest design : Observations are made before and after a treatment has been given to a single group.

3 - Static group comparison design : Compare two groups - one which receives a treatment and the other which does not - after the treatment has been given to the first group. This does not involve random assignment to groups, nor does it involve random assignment of groups to treatment.

Single subject designs: These designs are very popular with behavioral psychologists. In single subject designs, one uses a single subject to study the effects of a treatment.

1 - A-B-A design : Observe a behavior prior to treatment (A), during treatment (B), and after treatment is withdrawn (A).

2 - A-B-A-B design : Same as the A-B-A design except that treatment is reinstituted after withdrawal of treatment.

3 - Multiple baseline design : Similar to A-B-A and A-B-A-B designs except that one observes either (a) multiple behaviors in one setting, or (b) a single behavior in multiple settings.

Correlational designs: A correlational design is one in which there is no attempt to manipulate the independent variable(s). A design is correlational not because of the use of the correlation coefficient, but because one is simply looking at relationships between variables as they naturally occur. Because of the lack of manipulation of the IV(s), there is a limited ability to make causal inferences.

1 - Simple correlation : the relationship between one IV and one DV.

2 - Multiple correlation : the relationship between multiple IVs and one DV.

3 - Canonical correlation : the relationship of multiple IVs with multiple DVs.

4 - Cross-lagged panel design : use of correlation to tease out causal connections.

5 - Path analysis/structural equation modeling : use of correlation to make inferences about complex causal connections.
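
To make the first two entries concrete (a sketch only - treating AGE, EDUC, and YRSMAR from the handout's data list as IVs and the depression total as the DV is purely for illustration), a simple correlation and a multiple correlation can be obtained as follows:

CORRELATIONS /VARIABLES=DEPTOTAL AGE.

REGRESSION
  /DEPENDENT DEPTOTAL
  /METHOD=ENTER AGE EDUC YRSMAR.

The R reported by the REGRESSION procedure is the multiple correlation of the set of IVs with the DV. Remember that neither coefficient, by itself, licenses a causal conclusion.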


ASSIGNMENT 2 RESEARCH DESIGN/DATA COLLECTION (45 POINTS)

In order to complete this homework assignment, you will need to get together with four of your friends in this class (a total of 5 people in each group). Each of you will need to give the questionnaire to five people, and pool your data together (i.e., share questionnaires). You should end up with 25 subjects.

1. Using the pooled data, each of you needs to turn in a properly completed data list using the data from the questionnaires. (5 points)

2. Using SPSS-PC+:

a. enter the data into the computer (2 points)
b. have SPSS-PC+ calculate a total score for depression (3 points)
c. save the data as a system file on your own floppy disk and TURN IN YOUR FLOPPY DISK. (5 points)

The commands you could use are included in this handout.

3. Develop two questions that could be answered using the questionnaire. Turn those questions into hypotheses. (10 points)

Question 1: ______

Hypothesis 1:______

Question 2: ______

Hypothesis 2:______

4. Don Templer believes that season of birth has an influence on the severity of a person's schizophrenia symptoms. He uses 15 subjects with schizophrenia, identifies their date of birth, and measures all subjects on his schizophrenia symptoms scale.

a. What is Don's research topic? (1 pt) ______
b. What is/are Don's questions? (1 pt) ______
c. What are the concepts used by Don? (2 pts) ______
d. How were these concepts connected? (2 pts) ______
e. What type of research design is Don using? (2 pts) ______
f. What are the threats to internal validity in this study? (2 pts) ______


5. Kevin O'Connor wants to assess the effectiveness of play therapy as a treatment modality for conduct disordered pre-teens. He believes that permitting children to cathect their anger in play therapy will reduce the incidence of aggressive behavior in the school. He compares a group of students who have had play therapy for six months with a group of students who have not had play therapy, using the Burks' Behavior Rating Scales.

a. What is Kevin's research topic? (1 pt) ______
b. What is/are Kevin's questions? (1 pt) ______
c. What are the concepts used by Kevin? (2 pts) ______
d. How were these concepts connected? (2 pts) ______
e. What type of research design is Kevin using? (2 pts) ______
f. What are the threats to internal validity in this study? (2 pts) ______


Suggested SPSS-PC+ Commands (NOTE: DO NOT type in the information in the brackets { and } - this is for your information only)

Step 1 - Turn on the computer

Step 2 - Select SPSS-PC+ from the menu; press the [ENTER] key

Step 3 - Press [ALT-M] to get rid of the stupid menus

Step 4 - Set the environment the way you want it.

SET PRINT ON.
SET AUTOMENU OFF.

Step 5 - specify the data you want to use as follows:

DATA LIST FREE/ AGE GENDER HOME EDUC MARITAL YRSMAR PROBLEMS ITEM01 TO ITEM10.
BEGIN DATA.

{your data entered here}

END DATA.

Step 6 - modify the variables and set missing values

MISSING VALUES ALL(99).
RECODE { recode the "reverse scored" items }
COMPUTE DEPTOTAL = ITEM01 + ITEM02 {etc. etc.}
SAVE OUTFILE='MYNAME.SYS'.

Remember to: 1) take the cursor to the beginning line in each step and 2) press [ALT-C] at the end of each step (from step 4 on) to get SPSS-PC+ to run your commands
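
The RECODE and COMPUTE lines in Step 6 are left for you to fill in from your own questionnaire. Purely as a hypothetical pattern (which items are reverse-scored depends on your questionnaire - ITEM03 and ITEM07 below are only examples, and a 1-to-5 response scale is assumed), the completed lines might look like this:

RECODE ITEM03 ITEM07 (1=5)(2=4)(3=3)(4=2)(5=1).
COMPUTE DEPTOTAL = SUM(ITEM01 TO ITEM10).
DESCRIPTIVES VARIABLES=AGE DEPTOTAL.

The DESCRIPTIVES line is optional; it simply gives you means and ranges so you can spot data-entry errors before saving the file.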
