Experimental Psych Lecture Notes PDF
Krisette Romero, RPm
Summary
These notes cover the basics of experimentation in psychology. The document outlines the learning objectives, discusses forming hypotheses, and explains the concepts of independent and dependent variables in experimental research.
Full Transcript
PSYCH 30075 Experimental Psych. Second Sem. Krisette Romero, RPm

The basics of experimentation

Lesson overview
Experimental psychology is an undergraduate psychology course designed to provide students with knowledge about, and hands-on practice with, experimental research methods in psychology.

Learning objectives
- Explain what an experiment is and recognize examples of studies that are experiments and studies that are not experiments.
- Explain what internal validity is and why experiments are considered to be high in internal validity.
- Explain what external validity is and evaluate studies in terms of their external validity.
- Distinguish between the manipulation of the independent variable and the control of extraneous variables, and explain the importance of each.
- Recognize examples of confounding variables and explain how they affect the internal validity of a study.

Forming a hypothesis
An experiment is a type of study designed specifically to answer the question of whether there is a causal relationship between two variables. Experiments have two fundamental features. The first is that the researchers manipulate, or systematically vary, the level of the independent variable. The different levels of the independent variable are called conditions. The second fundamental feature is that the researcher controls, or minimizes the variability in, variables other than the independent and dependent variable. These other variables are called extraneous variables. (We will discuss extraneous variables in later lessons.)

A hypothesis is a very specific, testable statement that can be evaluated from observable data. For example, we might hypothesize that drivers older than sixty-five years have a higher frequency of accidents involving left turns across oncoming traffic when driving at night than do younger drivers. By looking at police records of accident data, we could determine, with the help of some statistics, whether this hypothesis is incorrect.

A generalization is a broader statement that cannot be tested directly. For example, we might generalize that older drivers are unsafe at any speed and should have restrictions on their driver's license, such as not being able to drive at night. Since "unsafe at any speed" is not clearly defined, this is not a testable statement; likewise, the generalization does not define an age range for older drivers. However, it can be used to derive several testable hypotheses.

HYPOTHESES MUST BE SYNTHETIC, TESTABLE, FALSIFIABLE, PARSIMONIOUS, AND (HOPEFULLY) FRUITFUL.

Synthetic statements can be either true or false; an experimental hypothesis must be a synthetic statement, often phrased in if-then form. For example, "hungry students read slowly" could be shown to be true or false. Non-synthetic statements should be avoided:
- An analytic statement is one that is always true, e.g., "I am or am not pregnant."
- A contradictory statement contains elements that oppose each other and is therefore always false, e.g., "I do and do not have a brother."
Neither needs to be tested because the outcome is already known.

Examples of testable hypotheses: Students who attend class have higher grades than students who skip class. This is testable because it is possible to compare the grades of students who do and do not skip class and then analyze the resulting data. Another person could conduct the same research and come up with the same results.
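To make concrete how such a hypothesis could be evaluated against observable data, here is a minimal sketch in Python using invented grade data; the sample sizes, group means, and the use of scipy's independent-samples t-test are illustrative assumptions, not part of the original notes.

```python
# Illustrative sketch only: evaluating the hypothesis that students who
# attend class have higher grades than students who skip class.
# All numbers below are invented for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical final grades (0-100) for two observed groups of students.
attenders = rng.normal(loc=82, scale=8, size=40)  # regularly attend class
skippers = rng.normal(loc=75, scale=8, size=40)   # frequently skip class

# An independent-samples t-test asks whether the difference between the
# group means is larger than chance variation would be expected to produce.
t_stat, p_value = stats.ttest_ind(attenders, skippers)

print(f"mean grade, attenders: {attenders.mean():.1f}")
print(f"mean grade, skippers:  {skippers.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Note that comparing students who already attend or skip class tests the hypothesis correlationally rather than experimentally, because attendance is observed rather than manipulated by the researcher.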
A second testable hypothesis: People exposed to high levels of ultraviolet light have a higher incidence of cancer than the norm. This is testable because it is possible to find a group of people who have been exposed to high levels of ultraviolet light and compare their cancer rates to the average.

Falsifiable statements are "disprovable" by research findings; they need to be worded so that a failure to find the predicted effect must be considered evidence that the hypothesis is false.

Parsimonious statements: the simplest explanation is preferred.

Fruitful implies that, in the process of testing the conjecture, numerous rich new ideas are generated, producing a deeper understanding of the phenomenon being examined. Sometimes it is even suggested that a fruitful though mistaken hypothesis is to be preferred to a correct hypothesis that generates few ideas.

VARIABLES: IV and DV
All experiments require at least two special features: the independent and dependent variables. Consider a simple demonstration: you sit at a table, strew your articles across it, and then record what happens. The dependent variable is the response measure of an experiment that depends on the subject; in this case, the time that elapses until someone else sits down at the table is the dependent variable, or response measure. The independent variable is a manipulation of the environment controlled by the experimenter; in this case, it is the strewing of articles on the table. An experiment must have at least two values, or levels, of the independent variable. These levels may differ in a quantitative sense, or they may reflect a qualitative difference.

How might we change the procedure to obtain an experiment? The simplest way would be to sit down again, this time without scattering anything. Then our independent variable would have the necessary two levels: the table with items strewn about and the bare table with no items strewn about. Now we have something to compare with the first condition. The point is that at least two conditions must be compared with each other to determine if the independent variable produces a change in a behavior or outcome.

Variables are the gears and cogs that make experiments run. Effective selection and manipulation of variables make the difference between a good experiment and a poor one. This section covers the three kinds of variables that must be carefully considered before starting an experiment: independent, dependent, and control variables. We conclude by discussing experiments that have more than one independent or dependent variable.

The main advantage of experiments is better control of extraneous variation. In the ideal experiment, no factors (variables) except the one being studied are permitted to influence the outcome; in the jargon of experimental psychology, we say that these other factors are controlled. In true experiments, independent variables are those manipulated by the experimenter. Independent variables are selected because an experimenter believes they will cause changes in behavior. The dependent variable is the response measure of an experiment that depends on the subject's response to our manipulation of the environment. In other words, the subject's behavior is observed and recorded by the experimenter and is dependent on the independent variable.
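As a rough illustration of these two ingredients, the sketch below simulates the table demonstration with one independent variable at two levels (items strewn about vs. bare table) and one dependent variable (minutes until someone else sits down). The waiting-time distributions and sample sizes are invented assumptions, not data from the notes.

```python
# Illustrative sketch: one independent variable with two levels and one
# dependent variable. All waiting times are invented for demonstration.
import numpy as np

rng = np.random.default_rng(seed=2)

# Two levels (conditions) of the independent variable, each observed on
# 20 simulated occasions; the dependent variable is minutes until another
# person sits down at the table.
conditions = {
    "items strewn about": rng.normal(loc=35, scale=10, size=20),
    "bare table": rng.normal(loc=15, scale=10, size=20),
}

for level, wait_times in conditions.items():
    print(f"{level:>18}: mean wait = {wait_times.mean():.1f} minutes")

# With only one condition there would be nothing to compare; two levels let
# us ask whether the manipulation changed the dependent variable.
difference = (conditions["items strewn about"].mean()
              - conditions["bare table"].mean())
print(f"difference between conditions: {difference:.1f} minutes")
```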
One criterion for a good dependent variable is stability. When an experiment is repeated exactly (same subject, same levels of the independent variable, and so on), the dependent variable should yield the same score as it did previously.

Null results can often be caused by inadequacies in the dependent variable, even if it is stable. The most common cause is a restricted or limited range of the dependent variable, so that scores get "stuck" at the bottom or the top of the scale. Getting stuck at the bottom of the scale (for example, nearly everyone scoring 0 percent correct) is called a floor effect; the opposite problem, getting stuck at the top (for example, nearly everyone scoring 100 percent correct), is called a ceiling effect. Ceiling and floor effects prevent the influence of an independent variable from being accurately reflected in a dependent variable.

Reliability and Validity
Reliability refers to the consistency of experimental and measured operational definitions; that is, how consistently a method measures something. If the same result can be achieved consistently by using the same methods under the same circumstances, the measurement is considered reliable.
- Interrater reliability is the degree to which observers agree in their measurement of the behavior, e.g., the degree to which three observers agree when scoring the same personal essays for optimism.
- Test-retest reliability is the degree to which a person's scores are consistent across two or more administrations of a measurement procedure, e.g., highly correlated scores on the Wechsler Adult Intelligence Scale-Revised when it is administered twice, two weeks apart.
- Inter-item reliability measures the degree to which different parts of an instrument (questionnaire or test) that are designed to measure the same variable achieve consistent results.

Validity is the extent to which a concept, conclusion, or measurement is well founded and likely corresponds accurately to the real world. If a study is valid, then it truly represents what it was intended to represent. Experimental validity concerns the ways in which variables influence both the results of the research and the generalizability of those results to the population at large. It is broken down into two groups: (1) internal validity and (2) external validity.

An empirical study is said to be high in internal validity if the way it was conducted supports the conclusion that the independent variable caused any observed differences in the dependent variable. Thus, experiments are high in internal validity because the way they are conducted (with the manipulation of the independent variable and the control of extraneous variables) provides strong support for causal conclusions.

An empirical study is high in external validity if the way it was conducted supports generalizing the results to people and situations beyond those actually studied. As a general rule, studies are higher in external validity when the participants and the situation studied are similar to those that the researchers want to generalize to and that participants encounter every day, a quality often described as mundane realism.
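Returning to the reliability indices described above, here is a minimal sketch of how test-retest reliability and inter-item reliability (Cronbach's alpha) might be computed. The scores, the hypothetical 5-item optimism questionnaire, and the sample sizes are invented assumptions for illustration only.

```python
# Illustrative sketch: two common reliability indices on invented data.
import numpy as np

rng = np.random.default_rng(seed=3)

# Test-retest reliability: invented intelligence-test scores for 30 people
# measured twice, two weeks apart; the retest mostly preserves rank order.
time1 = rng.normal(loc=100, scale=15, size=30)
time2 = time1 + rng.normal(loc=0, scale=5, size=30)
test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest correlation: {test_retest_r:.2f}")

# Inter-item reliability (Cronbach's alpha): invented responses of 50 people
# to a hypothetical 5-item optimism questionnaire.
items = rng.normal(loc=3.0, scale=1.0, size=(50, 5))
items += rng.normal(loc=0.0, scale=0.8, size=(50, 1))  # shared person factor

n_items = items.shape[1]
sum_of_item_variances = items.var(axis=0, ddof=1).sum()
variance_of_totals = items.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - sum_of_item_variances / variance_of_totals)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Interrater reliability could be sketched the same way, for example by correlating two observers' optimism ratings of the same set of essays.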
Construct Validity
In addition to the generalizability of the results of an experiment, another element to scrutinize in a study is the quality of the experiment's manipulations, or its construct validity. Operationalization is the conversion from research question to experimental design. A construct is the abstract idea, underlying theme, or subject matter that one wishes to measure. In the phenomenon studied, bystander intervention, Darley and Latané operationalized the independent variable of diffusion of responsibility by increasing the number of potential helpers. In evaluating this design, we would say that the construct validity was very high because the experiment's manipulations very clearly speak to the research question: there was a crisis, there was a way for the participant to help, and increasing the number of other students involved in the discussion provided a way to test diffusion. When designing your own experiment, consider how well the research question is operationalized by your study.

Statistical Validity
Statistical validity concerns the proper statistical treatment of data and the soundness of the researchers' statistical conclusions. When deciding on the proper type of test, researchers must consider the scale of measurement of their dependent variable and the design of their study. One common critique of experiments is that a study did not have enough participants; the main reason for this criticism is that it is difficult to generalize about a population from a small sample. The proper statistical analysis should be conducted on the data to determine whether the predicted difference or relationship was found. The number of conditions and the total number of participants determine the statistical power of the study rather than the size of the effect itself.
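To tie these statistical-validity points together, the hedged sketch below shows one conventional choice for a between-subjects design with two conditions and an interval-scale dependent variable: an independent-samples t-test together with an effect-size estimate (Cohen's d). The scores, group sizes, and condition names are invented for illustration.

```python
# Illustrative sketch: matching the statistical test to the design.
# Between-subjects design, two conditions, interval-scale dependent variable.
# All scores are invented for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)

control = rng.normal(loc=50, scale=10, size=15)    # small sample per group
treatment = rng.normal(loc=56, scale=10, size=15)

t_stat, p_value = stats.ttest_ind(treatment, control)

# Cohen's d: the mean difference expressed in pooled standard-deviation units.
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```

With only 15 participants per condition the test has limited power and the estimate of d is imprecise; recruiting more participants improves power and the ability to generalize, but it does not change the true size of the effect.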