Experimental Psychology: Chapter 5

Questions and Answers

What is a hypothesis?

A hypothesis is an explanation of a relationship between two or more variables.

What is an experimental hypothesis?

An experimental hypothesis is a tentative explanation that predicts the effect of an independent variable on a dependent variable.

What is a nonexperimental hypothesis?

A nonexperimental hypothesis predicts how variables (events, traits, or behaviors) might be correlated or related, but not necessarily causally.

A hypothesis must be a synthetic statement, meaning it must be capable of being either true or false.

True.

What is testability in the context of an experimental hypothesis, and why is it important?

Testability means an experimental hypothesis can be assessed by manipulating an independent variable (IV) and measuring the results on the dependent variable (DV). It is important because without testability, we cannot evaluate the validity of the hypothesis.

What does it mean for a hypothesis to be parsimonious, and why is this preferred?

Parsimony means that we prefer a simple hypothesis over one requiring many supporting assumptions.

Explain the inductive model of formulating a hypothesis.

Induction is reasoning from specific cases or observations to generate broader general principles or hypotheses.

Explain the deductive model of formulating a hypothesis.

Deduction is reasoning from general principles (like a theory) to make specific, testable predictions (hypotheses).

How can researchers combine induction and deduction to develop and test theories?

Researchers can use induction to develop propositions or theories based on specific observations, and then use deduction to make specific predictions (hypotheses) from that theory, which are then tested empirically.

What is often considered the most useful way to develop a testable hypothesis?

Reviewing research that has already been published.

List three ways a review of prior experiments helps develop a hypothesis.

Any three of the following: 1. Identifies questions not yet conclusively answered. 2. Suggests new hypotheses. 3. Identifies potential mediating variables. 4. Highlights problems encountered by other researchers. 5. Helps avoid unnecessary duplication of research (unless replication is intended).

How can serendipity lead to a fruitful hypothesis?

Serendipity involves making unexpected discoveries. A scientist who is open to unexpected results and sufficiently informed can recognize the significance of accidental findings and develop new hypotheses from them.

What is intuition in the context of research?

Intuition is described as knowing without reasoning, or as unconscious problem-solving.

What is the primary purpose of the Introduction section in an APA-format paper?

The Introduction provides a selective review of research findings related to the research hypothesis, identifies questions not definitively answered by previous studies, and explains how the current experiment advances knowledge in the area.

What is a meta-analysis and what information can it provide?

A meta-analysis is a statistical analysis, not an experiment itself. It analyzes the results of many similar studies to measure the average effect size of an independent variable across those studies.

What is an independent variable (IV) in an experiment?

An independent variable is the variable (antecedent condition) that an experimenter intentionally manipulates to observe its effect.

What does it mean for an experiment to be confounded?

An experiment is confounded when the value of an extraneous variable systematically changes along with the independent variable.

What is a dependent variable (DV) in an experiment?

A dependent variable is the outcome measure the experimenter uses to assess the change in behavior produced by manipulating the independent variable.

What is an operational definition?

An operational definition specifies the exact meaning of a variable in an experiment by defining it in terms of observable operations, procedures, and measurements.

Differentiate between an experimental and a measured operational definition.

An experimental operational definition specifies the exact procedure for creating the different values (levels) of the independent variable. A measured operational definition specifies the exact procedure for measuring the dependent variable.

Match the scale of measurement with its description.

Nominal Scale = Assigns items to distinct named categories without measuring magnitude.
Ordinal Scale = Measures magnitude using ranks, but intervals between ranks may not be equal.
Interval Scale = Measures magnitude with equal intervals between values, but no absolute zero.
Ratio Scale = Measures magnitude with equal intervals and an absolute zero point.

What is reliability in measurement?

Reliability refers to the consistency of experimental operational definitions and measured operational definitions.

Define interrater reliability.

Interrater reliability is the degree to which different observers agree in their measurement or scoring of the same behavior.

Define test-retest reliability.

Test-retest reliability is the degree to which a person's scores are consistent across two or more administrations of the same measurement procedure.

Define interitem reliability.

Interitem reliability measures the degree to which different parts of an instrument (like items on a questionnaire or test) that are designed to measure the same variable achieve consistent results.

What is validity in the context of experimental research?

Validity means that an operational definition accurately manipulates the intended independent variable or accurately measures the intended dependent variable.

What is face validity?

Face validity is the degree to which a manipulation or measurement technique appears, on the surface, to be valid. It is based on self-evidence.

What is content validity?

Content validity refers to how accurately a measurement procedure samples the full content domain of the variable being measured.

What is predictive validity?

Predictive validity assesses how accurately a measurement procedure predicts future performance or behavior.

What is construct validity?

Construct validity refers to how accurately an operational definition represents the underlying theoretical construct it is intended to measure or manipulate.

What is internal validity?

Internal validity is the degree to which changes observed in the dependent variable across different treatment conditions were actually caused by the independent variable, rather than by extraneous factors.

When does the problem of confounding occur?

Confounding occurs when an extraneous variable systematically changes across the different experimental conditions along with the independent variable.

What is a history threat to internal validity?

A history threat occurs when an event outside the experiment occurs during the study and affects the dependent variable, potentially confounding the results.

What is a maturation threat to internal validity?

A maturation threat occurs when physical or psychological changes within the subjects over time (e.g., fatigue, boredom, growth) affect the dependent variable, threatening internal validity.

What is a testing threat to internal validity?

A testing threat occurs when prior exposure to a measurement procedure (e.g., taking a pretest) affects performance on that same measure during later stages of the experiment.

What is an instrumentation threat to internal validity?

An instrumentation threat occurs when changes in the measurement instrument or measuring procedure itself over the course of an experiment threaten internal validity.

What is a statistical regression threat (regression toward the mean)?

A statistical regression threat occurs when subjects are assigned to conditions based on extreme scores, and their scores naturally tend to move closer to the average upon retesting, independent of any treatment effect.
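The pull toward the average can be demonstrated with a small simulation; the population parameters, noise level, and selection rule below are arbitrary illustrations, not values from the text:

```python
import random

rng = random.Random(0)

# Each subject has a stable true score; each test adds independent noise.
true_scores = [rng.gauss(100, 10) for _ in range(1000)]
test1 = [t + rng.gauss(0, 10) for t in true_scores]
test2 = [t + rng.gauss(0, 10) for t in true_scores]

# Select the 100 highest scorers on the first test (extreme scores).
top = sorted(range(1000), key=lambda i: test1[i], reverse=True)[:100]
mean1 = sum(test1[i] for i in top) / 100
mean2 = sum(test2[i] for i in top) / 100
# With no treatment at all, the group's retest mean (mean2) falls back
# toward the population mean of 100.
```

Because selection on test 1 captures lucky noise as well as high true scores, the retest mean drops even though nothing was done to the group.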

What is a selection threat to internal validity?

A selection threat occurs when individual differences between subjects are not balanced across treatment conditions due to faulty assignment procedures.

What is a subject mortality (attrition) threat?

A subject mortality or attrition threat occurs when subjects drop out of experimental conditions at different rates, potentially making the groups non-equivalent on the dependent variable or other characteristics.

What are selection interactions?

Selection interactions occur when a selection threat combines with at least one other threat (like history, maturation, or mortality), further complicating the interpretation of results.

What is the purpose of the Method section of an APA research report?

The Method section describes the Participants, Apparatus or Materials, and Procedure of the experiment in sufficient detail to allow another researcher to exactly replicate the study.

When is an Apparatus section needed in the Method section?

An Apparatus section is appropriate when the equipment used was unique, specialized, or requires detailed explanation for the reader to understand its capabilities and evaluate or replicate the experiment.

What are physical variables in an experiment?

Physical variables are aspects of the testing situation itself that need to be controlled, such as the time of day, experimental room characteristics, or lighting.

What is elimination as a technique for controlling extraneous variables?

Elimination involves completely removing extraneous physical variables from the experimental situation.

How does constancy of conditions work to control extraneous variables?

Constancy of conditions controls extraneous physical variables by keeping all aspects of the treatment conditions identical for all subjects, except for the manipulation of the independent variable.

How does balancing work to control extraneous variables?

Balancing controls extraneous physical variables by equally distributing their effects across the different treatment conditions.

What is the recommended order for using control techniques (elimination, constancy, balancing)?

1. Eliminate extraneous variables whenever possible.
2. Keep conditions constant where elimination is not possible.
3. Balance the effects of extraneous variables when constancy of conditions is not possible.

What are social variables in experimental research?

Social variables are aspects of the relationships between subjects and experimenters that can influence experimental results.

Explain demand characteristics.

Demand characteristics are cues within the experimental situation that might signal to participants how they are expected to behave or respond, potentially influencing their behavior.

How can demand characteristics threaten internal validity?

Demand characteristics can confound an experiment if they vary systematically across experimental conditions, leading subjects to act in ways that confirm what they believe to be the experimental hypothesis.

What is a single-blind experiment?

In a single-blind experiment, the subjects are not told which treatment condition they are in.

What is the placebo effect?

The placebo effect occurs when a subject receives an inert treatment (a placebo) but shows improvement in their condition simply because they expect the treatment to work.

What is a cover story and how does it help control demand characteristics?

A cover story is a false but plausible explanation of the experimental procedures used to disguise the actual research hypothesis from the subjects.

What is experimenter bias?

Experimenter bias is any behavior by the experimenter (e.g., unintentional differences in attention, treatment administration, or data recording) that can confound the experiment by influencing subject behavior or results differently across conditions.

What is the Rosenthal effect (or Pygmalion effect)?

The Rosenthal effect (also called Pygmalion effect or self-fulfilling prophecy) is the phenomenon where an experimenter's expectations about subjects lead them to treat subjects differently, and these actions influence the subjects' performance in line with the expectations.

Why is a double-blind design superior to a single-blind design in controlling potential biases?

A double-blind design controls both demand characteristics (because subjects are blinded) and experimenter bias (because the experimenter administering the treatment and collecting data is also blinded to the subjects' conditions). Single-blind only controls demand characteristics.

How might an experimenter's personality affect experimental results?

Research suggests experimenter personality can influence subject performance. Warm and friendly experimenters may elicit better performance and cooperation, while hostile or authoritarian experimenters might elicit inferior performance.

List two ways experimenters can control for the effects of their personality variables.

Any two of: 1. Use multiple experimenters and balance their use across conditions. 2. Treat 'experimenter' as an independent variable in the analysis to check for interactions. 3. Minimize face-to-face contact between experimenter and subject. 4. Strictly follow a script. 5. Videotape sessions to ensure consistency.

How do volunteers often differ from non-volunteers who could participate in research?

Volunteers tend to be more sociable, score higher in social desirability, hold more liberal social and political attitudes, be less authoritarian, and score higher on intelligence tests compared to nonvolunteers.

How might allowing subjects to select the experiment they participate in threaten validity?

If subjects can choose experiments based on appealing titles (e.g., "Heavy Metal Music Experiment" vs. "Memory Test Experiment"), it could result in biased samples participating in different studies, threatening external validity.

Why is it generally advised not to run friends in your experiment?

Selecting friends might bias your sample (selection bias), threatening external validity. Furthermore, both you and your friends might behave differently in the experimental context than you would with strangers, potentially affecting internal validity.

What does the 'folklore' about subjects suggest regarding those who sign up late versus early in the semester?

The folklore suggests that subjects who sign up late in the semester may be less motivated or behave differently than those who sign up earlier.

What is the purpose of an experimental design?

The design of an experiment details the experimenter's plan or structure for testing a hypothesis. It is the experiment's 'floor plan,' not its specific content.

What three main factors determine the selection of an experimental design?

1. The number of independent variables in the hypothesis.
2. The number of treatment conditions needed to fairly test the hypothesis.
3. Whether the same subjects are used in each treatment condition (within-subjects vs. between-subjects).

What defines a between-subjects design?

In a between-subjects design, each subject participates in only one condition of the experiment.

What determines whether we can generalize our findings to a larger population?

The representativeness of our sample determines generalizability. How well the sample reflects the characteristics of the entire population from which it was drawn dictates the extent to which results can be generalized.

What is a common rule of thumb for the minimum number of subjects needed in each treatment condition for a between-subjects design?

A common guideline is to have 10-20 subjects in each treatment condition.

What is effect size and why is it important?

Effect size is a statistical estimate of the size or magnitude of a treatment effect (the strength of the relationship between the IV and DV). It is important because it indicates practical significance and influences the number of subjects needed to detect an effect.

How do researchers typically determine the number of subjects required for an experiment based on effect size?

Researchers determine the required number of subjects based on the expected effect size (often estimated from prior research) using tools like power charts or specialized software programs.
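A rough sketch of that calculation for a two-group comparison, using the normal approximation to the power calculation; the alpha and power defaults are conventional choices, and the function name is illustrative, not from the text:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate subjects needed per group for a two-group comparison,
    given an expected effect size d (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed criterion
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

medium = n_per_group(0.5)  # a medium effect needs more subjects...
large = n_per_group(0.8)   # ...than a large effect
```

Smaller expected effects demand many more subjects per group, which is why effect-size estimates from prior research matter at the planning stage.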

What is a two-group design?

A two-group design involves creating two separate groups of subjects to compare two treatment conditions.

Describe a two independent groups design.

A two independent groups design involves one independent variable with two levels, where subjects are randomly assigned to one of the two treatment conditions.

Why do researchers use random assignment in independent groups designs?

Random assignment is used to assign subjects to conditions such that each subject has an equal chance of being in any condition. Its purpose is to equally distribute subject variables (individual differences) between the groups to prevent them from confounding the experiment.
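A minimal sketch of that procedure, assuming subjects can simply be shuffled and dealt into conditions (the function name is illustrative):

```python
import random

def randomly_assign(subjects, n_conditions=2, seed=None):
    """Shuffle the subject pool, then deal subjects out round-robin so
    each subject has an equal chance of landing in any condition."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    return [pool[i::n_conditions] for i in range(n_conditions)]

group_a, group_b = randomly_assign(range(1, 21), n_conditions=2, seed=42)
```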

How do experimental and control conditions typically differ?

The experimental condition presents a specific, non-zero value (level) of the independent variable. The control condition typically presents a zero level of the independent variable or provides a baseline for comparison.

Describe an experimental group-control group design.

This is a type of two-independent groups design where one group (experimental) receives a specific level of the IV, and the other group (control) receives the same procedures but no treatment (or a zero level of the IV).

Describe a two experimental groups design.

In a two experimental groups design (a type of two-independent groups design), subjects are assigned to one of two different, non-zero levels of the independent variable. There is no traditional 'control' group.

What limits the effectiveness of random assignment, especially in smaller groups?

Random assignment works less effectively with small numbers of subjects (e.g., 5-10 per condition). Chance alone can still lead to groups differing significantly on subject variables that could confound the experiment.

What is a two matched groups design?

A two matched groups design involves two groups of subjects where participants are first matched into pairs based on a subject variable correlated with the dependent variable. Then, members of each pair are randomly assigned to one of the two treatment conditions.

What is the purpose of matching subjects in a two matched groups design?

The purpose of matching is to create groups that are equivalent on potentially confounding subject variables that are strongly correlated with the dependent variable.

Explain how you would match subjects on IQ using precision matching.

With precision matching, you would form pairs of subjects who have identical IQ scores (e.g., a pair of subjects both with an IQ of 120). Then, randomly assign one member of each pair to treatment condition 1 and the other to treatment condition 2.
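A sketch of precision matching under the assumption that exact-score pairs exist; the names and IQ values are hypothetical:

```python
import random

def matched_pairs(subjects, seed=None):
    """Pair subjects with identical scores on the matching variable,
    then randomly assign one member of each pair to each condition."""
    rng = random.Random(seed)
    by_score = {}
    for name, score in subjects:
        by_score.setdefault(score, []).append(name)
    condition1, condition2 = [], []
    for names in by_score.values():
        rng.shuffle(names)
        # Pair off subjects sharing this exact score.
        for a, b in zip(names[0::2], names[1::2]):
            condition1.append(a)
            condition2.append(b)
    return condition1, condition2

# Hypothetical subjects matched on IQ.
c1, c2 = matched_pairs(
    [("Ana", 120), ("Ben", 120), ("Cai", 105), ("Dee", 105)], seed=1)
```

Each condition ends up with one member of every IQ-matched pair, so the groups are equivalent on IQ by construction.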

When should a researcher use a two matched groups design?

A two matched groups design should be used when there are two levels of the independent variable and there is an extraneous subject variable known to be correlated with the dependent variable that the researcher can measure.

What is a multiple groups design?

A multiple groups design is a between-subjects design that includes more than two levels of an independent variable.

What is a multiple independent groups design?

A multiple independent groups design is a specific type of multiple groups design where subjects are randomly assigned to one of the three or more treatment conditions.

What is block randomization and what does it guarantee?

Block randomization is a process for randomly assigning subjects to conditions in blocks, where each block contains one instance of each condition. It guarantees that an equal number of subjects are assigned to each treatment condition.
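The shuffled-blocks idea can be sketched as follows (illustrative function, not from the text):

```python
import random

def block_randomize(n_blocks, conditions, seed=None):
    """Build a run order from shuffled blocks; every block contains each
    condition exactly once, so condition counts stay equal overall."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)   # randomize order within this block
        schedule.extend(block)
    return schedule

order = block_randomize(4, ["A", "B", "C"], seed=7)
```

With 4 blocks of 3 conditions, the schedule assigns exactly 4 subjects to each of A, B, and C, no matter how the shuffles fall.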

What factors should a researcher consider when choosing the number of treatments (levels of the IV)?

The hypothesis being tested, findings from prior research, results from pilot studies, and practical limitations (like time, cost, available subjects) all help determine the optimal number of treatments.

What are some practical limitations on the number of treatments a researcher can include?

Practical limitations include the number of available subjects (more conditions require more subjects overall in between-subjects designs), the time available for conducting the experiment, and the expense involved.

What is a pilot study?

A pilot study is a small-scale trial run of an experiment conducted before the main study, using only a few subjects.

List three things a pilot study can help reveal.

Any three of: whether sufficient time has been allocated, if instructions are clear, if any deception worked as intended, whether manipulations were effective, if measures are appropriate, or if additional treatment conditions might be needed.

Flashcards

Quasi-experiments

Superficially resemble experiments but lack required manipulation or random assignment.

Linearity (Correlation)

The degree to which X and Y can be plotted as a line or curve.

Sign (Correlation)

Whether the correlation coefficient is positive or negative.

Magnitude (Correlation)

The strength of the correlation coefficient, ranging from -1 to +1.

Probability (Correlation)

The likelihood of obtaining a correlation coefficient of a certain magnitude due to chance.

Coefficient of determination

Estimates the amount of variability that can be explained by a predictor variable.

Correlation and Causation

Since correlational studies do not create multiple levels of an independent variable and randomly assign subjects to conditions, they cannot establish causal relationships.

Cross-lagged panel design

Measuring relationships over time to suggest a causal path.

Ex-post facto design

Examines effects of already existing subject variables without manipulation.

Nonequivalent groups design

Compares the effects of treatments on pre-existing groups of subjects.

Pretest/posttest design

Measures behavior before and after an event without a control condition.

Hypothesis

A tentative explanation of a relationship between two or more variables.

Operational definition

Specifies the exact meaning of a variable in an experiment by defining it in terms of observable operations, procedures, and measurements.

Nominal scale

Assigns items to two or more distinct categories that can be named using a shared feature, but does not measure their magnitude.

Ordinal scale

Measures the magnitude of the dependent variable using ranks, but does not assign precise values.

Interval scale

Measures the magnitude of the dependent variable using equal intervals between values with no absolute zero point.

Confounding

An experiment is confounded when the value of an extraneous variable systematically changes along with the independent variable.

Demand characteristics

Demand characteristics are cues within the experimental situation that demand or elicit specific participant responses.

Placebo effect

The placebo effect is when a subject receives an inert treatment and improves because of positive expectancies.

Rosenthal effect

The Rosenthal effect is the phenomenon in which experimenters treat subjects differently based on their expectations and their resulting actions influence subject performance.

Study Notes

  • Chapter 5 discusses correlational and quasi-experimental designs in experimental psychology.

Quasi-Experiments vs. Actual Experiments

  • Quasi-experiments resemble real experiments but lack manipulation of antecedent conditions and/or random assignment.
  • These studies look at effects of pre-existing conditions or subject characteristics on behavior.
  • For instance, an examination of the incidence of Alzheimer's disease in ibuprofen users versus non-users after age 50 could be a quasi-experiment.
  • Experiments involve researchers assigning subjects to conditions they create.
  • Quasi-experiments are suitable when antecedent conditions should not or cannot be manipulated.
  • Studying the impact of spousal abuse on child abuse frequency is an example where quasi-experiments are appropriate.

Properties of Correlation

  • Pearson correlation coefficients assess simple correlations and are reported in the form r(50) = +.70, p = .001.
  • Correlation coefficients are described by linearity, sign, magnitude, and probability.
  • Linearity indicates if the relationship plots as a line or a curve.
  • Sign indicates if the correlation is positive or negative.
  • Magnitude measures the strength, ranging from -1 to +1.
  • Probability denotes the likelihood of obtaining the observed magnitude by chance.
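Sign and magnitude fall directly out of the computation; the paired scores below are invented for illustration:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores: hours studied (x) and exam score (y).
x = [1, 2, 3, 4, 5, 6]
y = [55, 60, 58, 70, 72, 80]

r = pearson_r(x, y)
sign = "positive" if r > 0 else "negative"  # sign of the relationship
magnitude = abs(r)                          # strength, from 0 to 1
```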

Scatterplots

  • Scatterplots are a graphic display of data point pairs on x and y axes.
  • Scatterplots depict correlation's linearity, sign, magnitude, and probability.

Range Truncation

  • Range truncation artificially restricts X and Y ranges, reducing correlation strength.

Outliers

  • Outliers are extreme scores affecting data trends and correlations.
  • Range truncation eliminates outliers.

Coefficient of Determination

  • The coefficient of determination (r²) estimates the proportion of variability in one variable that can be explained by a predictor variable.
  • For example, handshake firmness accounted for 31% of the variability in first-impression positivity.
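The arithmetic is just squaring the correlation; the r = .56 value below is back-calculated from the 31% figure and is purely illustrative:

```python
# Coefficient of determination: square the correlation coefficient.
r = 0.56                                     # illustrative correlation
r_squared = r ** 2                           # proportion of variability explained
percent_explained = round(r_squared * 100)   # about 31%
```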

Correlation vs. Causation

  • Correlational studies do not establish causation since there are no manipulated independent variables or random assignment.
  • Three reasons correlations cannot prove causation:
  • Causal direction
  • Bidirectional causation
  • The third variable problem
  • Causal direction occurs because correlation is symmetrical, and B may as easily cause A as A causes B.
  • Bidirectional causation is where two variables affect one another.
  • The third variable problem occurs when a third variable creates the appearance of a relationship between the other two.

Multiple Correlation (R)

  • Multiple correlation (R) is used to determine relationships among three or more variables.
  • Age, TV watching, and vocabulary were measured, with an R of +.61.

Partial Correlation

  • Partial correlation involves keeping one variable constant to assess its influence on the correlation between two others.
  • Age can be held constant to measure how television viewing affects vocabulary.
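For three variables there is a closed-form first-order partial correlation; the three correlations fed in below are made-up numbers, not results from the chapter:

```python
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    """Correlation between X and Y with Z held constant (first order)."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical: TV watching (X), vocabulary (Y), age (Z).
r_tv_vocab_given_age = partial_r(r_xy=0.50, r_xz=0.60, r_yz=0.70)
```

Holding age constant strips out the variation that age shares with both TV watching and vocabulary, which often shrinks the raw correlation substantially.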

Multiple Regression

  • Multiple regression predicts behavior using scores from multiple variables.
  • Estimating vocabulary by inputting age and TV watching habits as predictor variables.

Causal Modeling

  • Causal modeling creates and tests models suggesting cause-and-effect relationships.
  • Path analysis and cross-lagged panel designs are forms of causal modeling.

Path Analysis

  • Path analysis creates and tests models of causal sequences via multiple regression.

Cross-Lagged Panel Design

  • In cross-lagged panel design, relationships measured over time suggest a causal path.

Ex Post Facto Design

  • Ex post facto designs examine effects of pre-existing subject variables without manipulating them.

Non-Equivalent Groups Design

  • Non-equivalent groups design compares treatment effects on pre-existing groups.
  • For example, fluorescent lighting is installed in Company A, and productivity is then compared with that of Company B, which uses incandescent lighting.

Longitudinal vs. Cross-Sectional Approaches

  • Longitudinal study measures the same subjects at different times to see the effect of time.
  • Cross-sectional studies compare different developmental stages or classes simultaneously.

Pretest-Posttest Design

  • Pretest/posttest designs measure behavior before and after an event.
  • This is said to be quasi-experimental, as there is no control condition.
  • For example: Practice GRE test 1, a six-week prep course, then Practice GRE test 2.

Internal Validity Problems with Pretest-Posttest Design

  • Lacks a control group that receives a different IV level or no preparation course.
  • Practice effects (pretest sensitization) may confound the results due to reduced anxiety and learning from pretest answers.

Solomon 4-Group Design

  • The Solomon 4-group design includes four conditions:
  • Group 1 receives pretest, treatment, and posttest
  • Group 2 receives pretest and posttest only
  • Group 3 receives treatment and posttest only
  • Group 4 receives posttest only

Hypothesis Definition

  • Hypotheses explain relationships between two or more variables.

Experimental Hypotheses

  • An experimental hypothesis tentatively explains an event or a behavior.
  • It predicts the effect of an independent variable on a dependent variable.
  • Cognitive behavior therapy (CBT) produces less relapse than antidepressants, for instance.

Non-Experimental Hypotheses

  • Nonexperimental hypotheses predict how variables might correlate, but not establish causation.
  • For example: red-haired patients receive less pain relief from medication than blonde patients.

Synthetic Statements

  • A hypothesis must be a synthetic statement, capable of being true or false.
  • For example, a study of the effects of morning meals on student reading ability might test the hypothesis:
  • "Hungry students read slowly."

Testability

  • An experimental hypothesis is testable via manipulating an IV and measuring the DV results.

Parsimony

  • Parsimony means a simpler hypothesis is preferred over a complex one.

Intuition

  • Hypotheses based on intuition should still be guided by a review of the literature.

Helpful Strategies for Developing Hypotheses

  • Helpful strategies: read psychology journals, observe people in public, and identify real-world problems and their possible causes.

APA-Format Paper Introduction

  • The introduction selects relevant research related to the hypothesis.
  • It shows how the study furthers knowledge by addressing unanswered questions.

Value of Meta-Analysis

  • Provides helpful information on the topic.
  • It is not an experiment but a statistical analysis that combines many studies.
  • Measures the average effect size of an independent variable across studies with similar methods.
  • Establishes strength and external validity of causal relationships.

Independent Variables

  • Independent variable (IV) is the variable or antecedent condition intentionally manipulated by an experimenter.
  • Independent variable levels are experimenter-created values of the IV.
  • Experiments require at least two levels.

Confounding Explained

  • An experiment is confounded when an extraneous variable value changes with the independent variable.
  • Running experimental subjects in the morning and control subjects at night exemplifies this: time of day changes along with the IV.

Dependent Variables

  • Dependent variables measure outcome; the experimenter measures the change in behavior produced by the independent variable.
  • The value of the dependent variable depends on the independent variable value.

Operational Definitions

  • Operational definitions specify a variable's experimental meaning.
  • The variable is defined in terms of observable operations, procedures, and measurements.

Experimental Operational Definitions

  • Experimental operational definitions specify the procedure to create independent variable values.

Measured Operational Definitions

  • Measured operational definitions specify the procedure to measure the dependent variable.

Types of Scales

  • Nominal scales assign items to categories with shared features, without measuring magnitude.
  • Sorting animals into friendly and shy categories.
  • Ordinal scales measure magnitude using ranks without precise values.
  • Interval scales measure magnitude with equal intervals but lack absolute zero.
    • Degrees Celsius or Fahrenheit.
    • Sarnoff and Zimbardo's (1961) 0-100 scale.
  • Ratio scales measure magnitude with absolute zero and equal intervals.
    • Distance in meters or time in seconds.

Reliability

  • Reliability indicates consistency of experimental/measured operational definitions.
  • A bathroom scale that gives the same reading each time you step on it is an example.

Interrater Reliability

  • Interrater reliability measures how much observers agree when measuring behavior.
  • Example: agreement among three observers when scoring personal essays for optimism.
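A simple index of interrater reliability is percent agreement: the proportion of items on which raters give the same score. A minimal sketch for two raters (the ratings are invented):

```python
def percent_agreement(r1, r2):
    """Proportion of items on which two raters give the same rating."""
    matches = sum(1 for a, b in zip(r1, r2) if a == b)
    return matches / len(r1)

# Hypothetical optimism codes (1 = optimistic, 0 = not) for five essays
rater_a = [1, 1, 0, 1, 0]
rater_b = [1, 0, 0, 1, 0]
agreement = percent_agreement(rater_a, rater_b)  # raters match on 4 of 5 essays
```

Percent agreement is the simplest such index; published studies often report chance-corrected statistics like Cohen's kappa instead.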

Test-Retest Reliability

  • Test-retest reliability is how consistent a person's scores are across multiple administrations.
  • Wechsler Adult Intelligence Scale-Revised exhibits highly correlated scores when administered twice, two weeks apart.
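Test-retest reliability is typically quantified as the Pearson correlation between the two administrations. A sketch with invented scores:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical IQ scores from two administrations two weeks apart
first  = [100, 110, 120, 130]
second = [104, 113, 121, 133]
r = pearson(first, second)  # near 1.0 indicates high test-retest reliability
```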

Interitem Reliability

  • Interitem reliability indicates consistency across different instrument parts that measure the same variable.

Validity

  • Validity means an operational definition accurately manipulates the independent variable or measures the dependent variable.

Face Validity

  • Face validity judges if a manipulation/measurement technique has self-evident validity.
  • Using a ruler to measure pupil size.

Content Validity

  • Content validity indicates how well a measurement samples the content of the dependent variable.
  • When an exam only contains questions on chapter 2 when the exam is supposed to be over chapters 1-4, it has poor content validity.

Predictive Validity

  • Predictive validity measures how well a procedure predicts future performance.
  • ACT scores correlating with college GPA.

Construct Validity

  • Construct validity is how well an operational definition represents a construct.
  • For example, a measure of a construct such as hostility should capture its components, like the perception of others as unfriendly.

Internal Validity

  • Internal validity means the changes in the dependent variable came from the experimental conditions.
  • Establishes a cause-and-effect relationship.

Confounding

  • This occurs when extraneous conditions systematically change across the experimental conditions.
  • Studying meditation and prayer effects on blood pressure becomes confounded if one group exercises more.

History Threat

  • Occurs when outside events threaten validity by altering the dependent variable.
  • Measuring group A before lunch and group B after lunch.

Maturation Threat

  • Comes when subjects' physical or psychological transformations threaten validity by changing the DV.
  • Boredom increasing subject errors.

Testing Threat

  • Threat comes when exposure to the testing affects performance.

Instrumentation Threat

  • Instrumentation threat occurs when changes in the measurement instrument affect internal validity.
  • For example, a reaction-time device that becomes less accurate over the course of the experiment.

Statistical Regression Threat

  • Occurs when subjects are assigned to conditions on the basis of extreme scores, the measure is unreliable, and subjects are retested; retest scores tend to regress toward the mean.

Selection Threat

  • Selection threat occurs when subjects are assigned to conditions nonrandomly, so the groups differ before any treatment is applied.

Subject Mortality Threat

  • Subject mortality happens when subjects drop from experimental conditions at different rates.

Selection Interactions

  • Selection threats interact with history, maturation, statistical regression, subject mortality, or testing.

Method Section Purpose

  • Method details experiment’s Participants, Apparatus/Materials, and Procedure.
  • Readers get sufficient detail to exactly replicate the experiment.

When to Use Apparatus Section

  • Apparatus is needed with unique specialized equipment, or when capabilities of common equipment must be explained.

Physical Variables

  • Day of week, experimental room, and lighting are aspects of the situation that should be controlled.

Elimination

  • Elimination fully removes extraneous physical variables from the situation.

Constancy of Conditions

  • Constancy controls extraneous physical variables by keeping treatment conditions identical, except the IV.

Balancing

  • Balancing distributes extraneous physical variable effects across treatment conditions.
  • It reduces the impact of variables that cannot be eliminated.

Ordered Techniques

  • Eliminate extraneous variables where possible; keep conditions constant where elimination is not possible; balance where constancy is not possible.

Social Variables

  • Social variables that influence experimental results include demand characteristics and experimenter bias.

Demand Characteristics Explained

  • Demand characteristics are the experimental situation cues that elicit specific participant responses.
  • For example, students cue professors to end lectures by packing up their belongings.

Threat to Internal Validity

  • Demand characteristics can confound experiments by varying across experimental conditions.
  • Subjects may even act to try to confirm the hypothesis.

Single-Blind Experiments

  • In single-blind experiments, subjects are not told which treatment condition they are in.
  • For example, in a single-blind drug study, the capsules for every condition look and taste identical.

Why Use Single-Blind Experiments

  • Keeping subjects blind to the treatment condition eliminates cues that might alter their behavior.

Placebo Effect

  • The placebo effect occurs when participants improve simply because they believe they have received a treatment, not because of the treatment itself.

Demand Characteristics Controlled

  • A cover story is a false but plausible explanation used to disguise the hypothesis.
  • Cover stories should be used sparingly, since they are a form of deception.

Experimenter Bias

  • Any behavior from the experimenter that could confound the experiment.
  • For example, researchers may give more attention to subjects in one condition than in another.

Rosenthal Effect

  • The Rosenthal effect occurs when experimenters' expectations lead them to treat subjects differently, which in turn influences subject performance.
  • It is also called the Pygmalion effect, a form of self-fulfilling prophecy.
  • Paying more attention and giving more feedback to students believed to have high aptitude is a classic example.

Minimize Experimenter Bias

  • Single-blind designs control demand characteristics but not experimenter bias, since only the subjects are blind.
  • Double-blind experiments control both demand characteristics and experimenter bias, because neither the subjects nor the experimenter knows the condition assignments.

How Personality Affects Experimental Results

  • Warm, friendly experimenters tend to elicit better subject performance.
  • Hostile experimenters tend to produce inferior subject performance.

Control Personality Variables

  • Multiple experimenters balance the amount of the test subjects.
  • Statistical experimenters are used with independent variables.
  • There must an interaction that has been confounded with single experimenters to the follow contact minimizes.

Volunteers vs. Non-Volunteers

  • Volunteers tend to score higher on intelligence tests than non-volunteers.
  • Volunteers also tend to be less authoritarian in their attitudes.

Context Variables

  • Context variables are features of the research situation, such as how subjects are recruited, that influence who participates and how they respond.

Select the Experiment

  • For example, subjects who sign up for a study advertised as being about heavy metal music are likely to differ from those who do not, biasing the sample.

Folklore Summary

  • Subjects' knowledge of folklore about a study's topic can shape how they respond.
  • For example, subjects who know the folklore associated with their astrological sign may behave in line with it.

Experimental Design Purpose

  • An experimental design is the general plan for testing a hypothesis.
  • The same design can be applied to many different investigations and hypotheses.

Key Factors

  • The appropriate experimental design is determined by three factors:
  • The number of independent variables
  • The number of levels (treatment conditions) of each independent variable
  • Whether the same or different subjects serve in each treatment condition

Between-Subjects Design

  • In a between-subjects design, each subject participates in only one condition.

Generalize the Findings

  • How the sample is selected determines whether results can be generalized to the population.
  • External validity increases when subjects are sampled at random.

Group Minimum Subjects

  • Each condition needs enough subjects for a genuine treatment effect to be detected.

Effect Size Importance

  • Effect size is a statistical estimate of the strength of a treatment effect.
  • Larger effect sizes indicate a stronger relationship between the independent and dependent variables.
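A common effect-size estimate for a two-group comparison is Cohen's d: the difference between the group means divided by the pooled standard deviation. A minimal sketch with made-up scores:

```python
import math

def cohens_d(g1, g2):
    """Cohen's d: standardized mean difference between two groups."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical DV scores for treatment and control groups
treatment = [5, 6, 7, 8, 9]
control   = [3, 4, 5, 6, 7]
d = cohens_d(treatment, control)  # by convention, d near 0.8 is a "large" effect
```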

Sample Sizes and Effects Size

  • The expected effect size helps determine how many subjects an experiment requires; researchers can estimate it from prior studies or with power-analysis software.

Two-Group Design

  • The two-group design compares two separate groups of subjects.
  • It comes in two versions: two independent groups and two matched groups.

Two Independent Groups Design

  • Subjects are randomly assigned to one of two levels of the independent variable.
  • A common version compares an experimental group with a control group.

Random Assignment

  • Random assignment gives every subject an equal chance of being placed in any condition, which equalizes the groups and keeps subject differences from confounding the experiment.
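Random assignment can be sketched as shuffling the subject pool and dealing it into equal groups. A minimal Python sketch (the subject labels are invented):

```python
import random

def randomly_assign(subjects, n_groups=2, seed=None):
    """Shuffle subjects and deal them into equal-sized groups."""
    rng = random.Random(seed)          # seeded for reproducibility
    pool = list(subjects)
    rng.shuffle(pool)
    size = len(pool) // n_groups
    return [pool[i * size:(i + 1) * size] for i in range(n_groups)]

subjects = [f"S{i}" for i in range(1, 21)]              # 20 hypothetical subjects
experimental, control = randomly_assign(subjects, seed=42)
```

Every subject ends up in exactly one group, and group membership is determined purely by chance.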

Conditions Differ

  • The two conditions present different values of the independent variable.
  • The control group receives a zero (baseline) level of the independent variable.
  • The experimental group receives the treatment or procedure being tested.

Two Designs Discussed

  • The two versions of the two-group design are the two independent groups design and the two matched groups design.
  • Both require that extraneous subject variables be evenly distributed across conditions.

Assign Subjects

  • With small samples, random assignment alone may not equalize the groups.
  • Matching subjects on a relevant variable before assignment provides additional control.

Two Groups Discussed

  • Once matched pairs are formed, assignment to the two conditions is still determined at random.

Matching Explained

  • Matching forms pairs of subjects with identical or very similar scores on the matching variable.

Two Matched Groups Design

  • One member of each matched pair is then randomly assigned to each condition.
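Matched assignment can be sketched as sorting subjects on the matching variable, pairing adjacent subjects, then flipping a coin within each pair. A minimal sketch; the pretest scores and subject labels are invented:

```python
import random

def matched_assignment(scores, seed=None):
    """scores: dict mapping subject -> matching-variable score.
    Returns (group1, group2), with one member of each pair in each group."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)          # order by matching score
    g1, g2 = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]            # two adjacent (similar) subjects
        rng.shuffle(pair)                            # random within-pair assignment
        g1.append(pair[0])
        g2.append(pair[1])
    return g1, g2

pretest = {"S1": 88, "S2": 72, "S3": 90, "S4": 75, "S5": 60, "S6": 63}
group1, group2 = matched_assignment(pretest, seed=7)
```

Each condition gets one member of every pair, so the groups start out roughly equal on the matching variable.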

Block Randomization

  • Block randomization assigns subjects in blocks; each block contains one slot for every condition, in random order.
  • This guarantees equal numbers of subjects in each condition.
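Block randomization amounts to repeatedly shuffling one complete set of conditions per block. A minimal sketch (the condition labels are invented):

```python
import random

def block_randomize(conditions, n_blocks, seed=None):
    """Build an assignment sequence of n_blocks blocks; each block
    contains every condition exactly once, in random order."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)               # random order within this block
        sequence.extend(block)
    return sequence

order = block_randomize(["drug", "placebo", "no-pill"], n_blocks=4, seed=1)
```

After any number of complete blocks, every condition has been assigned equally often.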

Researcher

  • Prior research and pilot studies can help researchers determine procedures and sample sizes in advance.

Experimental

  • The number of treatment conditions affects how much information an experiment yields; extra conditions require extra subjects.

Practical Limitations

  • Practical limitations, such as time, funding, and subject availability, constrain how ambitious a design can be.

Pilot Study

  • A pilot study is a small-scale trial run that determines whether an experiment's procedures work as intended.
  • Procedures can then be refined before the full experiment is run.
