Psychology Research Methods


Questions and Answers

What is a frequency claim?

A claim that describes the rate or degree of a single variable, often expressed as a percentage.

What is an association claim?

A claim stating that one variable is correlated or related to another variable.

What is a causal claim?

A claim arguing that one variable is responsible for causing a change in another variable.

What is random assignment in research?

A procedure where each participant has an equal probability of being assigned to any of the experimental conditions or groups.
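
As an illustration only (not part of the lesson), random assignment can be sketched in a few lines of Python: shuffle the participant list so every ordering is equally likely, then deal participants into conditions. The participant IDs and condition names below are hypothetical.

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Assign each participant to a condition so that every participant
    has an equal chance of landing in any condition (random assignment)."""
    rng = random.Random(seed)
    shuffled = participants[:]  # copy so the input list is untouched
    rng.shuffle(shuffled)       # every ordering is equally likely
    # Deal shuffled participants into conditions round-robin.
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = randomly_assign(list(range(20)), ["treatment", "control"], seed=1)
print({c: len(members) for c, members in groups.items()})
# → {'treatment': 10, 'control': 10}
```

With 20 participants and two conditions, the round-robin deal guarantees equal group sizes while the shuffle guarantees that who ends up where is left to chance.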

What question does covariance address in the context of establishing causality?

Covariance addresses whether variable A is related to variable B.

What is temporal precedence?

The criterion for establishing causality which states that the proposed causal variable (A) must occur or change before the proposed outcome variable (B).

Define internal validity.

The extent to which a study can confidently conclude that the independent variable (A) caused the observed change in the dependent variable (B), with no alternative explanations (confounds) for the outcome.

How is the covariance rule typically established in an experiment?

It is established by demonstrating a difference between the means of the experimental groups (including a comparison or control group).

According to the temporal precedence rule for establishing causality, the _____ must be manipulated before the _____ is measured.

Independent Variable (IV), Dependent Variable (DV)

What are two key elements used to satisfy the internal validity rule in an experiment?

Including control variables in the study design and using random assignment to conditions.

What is systematic variance, and why is it problematic in experiments?

Systematic variance occurs when another variable (a potential confound) fluctuates predictably along with the levels of the independent variable. It is problematic because it introduces confounds, making it difficult to determine if the IV or the other variable caused changes in the DV.

Define unsystematic variance and explain why it typically does not lead to confounds.

Unsystematic variance occurs when another variable fluctuates randomly across all levels of the independent variable. It generally does not lead to confounds because its influence is spread out across groups, rather than being concentrated in one condition.

What is the term for a variable that is inadvertently missed during experimental control and could potentially affect the outcome?

Confound variable (or simply confound).

How does a confound variable relate to the independent variable, and what type of validity does it threaten?

A confound variable is an extraneous variable that varies systematically with the independent variable. It poses a significant threat to internal validity.

What is a design confound? Provide an example.

A design confound occurs when the experimental setup itself allows another variable to vary systematically along with the independent variable. For example, if the tests given to the group using laptops were unintentionally harder than the tests given to the group writing notes by hand.

Explain what a selection effect is in experimental research.

A selection effect occurs when the characteristics of participants in one level of the independent variable systematically differ from the characteristics of participants in another level.

What is the primary method for correcting design confounds?

Using control variables to ensure that there are no systematic differences between conditions other than the levels of the independent variable.

What are the two main techniques used to correct for selection effects?

Random assignment or using matched groups.

Describe an independent-groups design (also known as a between-subjects design).

An experimental design where each participant is assigned to only one level of the independent variable. Data can be collected using a post-test only or both a pre-test and a post-test.

Describe a within-groups design (also known as a within-subjects design).

An experimental design where each participant experiences all levels of the independent variable. This can be implemented using concurrent measures or repeated measures.

What are concurrent measures in a within-groups design?

A type of within-groups design where participants are exposed to all levels of the independent variable at roughly the same time, and a single outcome measure reflects their preference or response.

What is a repeated measures design?

A type of within-groups design where participants are measured on the dependent variable after exposure to each level of the independent variable, typically experienced sequentially.

What are order effects in within-groups designs?

Order effects occur when exposure to one level of the independent variable influences responses to subsequent levels. They are a type of design confound specific to within-groups designs.

How can researchers correct for order effects in within-groups designs?

By using counterbalancing, which involves presenting the levels of the independent variable to participants in different sequences. The order of completion is typically randomly assigned.
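
To make counterbalancing concrete, here is a small illustrative Python sketch (not part of the lesson): full counterbalancing enumerates every possible presentation order of the IV levels, and each participant is then randomly assigned one of those sequences. The level names are invented for the example.

```python
from itertools import permutations
import random

def counterbalanced_orders(levels):
    """Full counterbalancing: every possible presentation order
    of the IV levels (k levels -> k! orders)."""
    return list(permutations(levels))

orders = counterbalanced_orders(["A", "B", "C"])
print(len(orders))  # → 6  (3! possible orders)

# Each participant is then randomly assigned one of the sequences:
rng = random.Random(42)
participant_order = rng.choice(orders)
print(participant_order in orders)  # → True
```

With many levels, k! grows quickly, which is why researchers often fall back on partial counterbalancing (a subset of orders, such as a Latin square) rather than the full set shown here.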

What is attrition in research?

Attrition occurs when participants drop out of a study before it is completed.

How can researchers address the potential problem of attrition?

One way is to compare the pretest scores (if available) of participants who dropped out with those who completed the study to check for systematic differences. Often, researchers may need to exclude participants with incomplete data from the final analysis.

What are testing effects (also known as practice effects)?

Testing effects refer to changes in participants' scores on a test or measure simply because they have taken it more than once, leading to improvement (practice) or worsening (fatigue/boredom).

What is an instrumentation threat to internal validity?

An instrumentation threat occurs when the instrument or method used to measure the dependent variable changes over the course of the study.

How can instrumentation threats be corrected or minimized?

By using a post-test only design (if appropriate), ensuring instruments are calibrated and used consistently, training observers thoroughly, or using counterbalancing if different forms of a test are necessary.

What are demand characteristics?

Cues or features of a study that suggest to participants what the purpose and hypothesis are, leading them to potentially alter their behavior to conform to these expectations.

Explain observer bias.

Observer bias occurs when the researcher's own expectations or hypotheses influence how they observe, interpret, or record participants' behavior or study outcomes.

What is a primary way to correct for observer bias?

Conduct a double-blind study.

What are placebo effects?

Placebo effects occur when participants show a change or improvement in their condition simply because they believe they are receiving an effective treatment, even if the treatment is inert (like a sugar pill).

How can researchers control for placebo effects?

By using a double-blind study design and including a placebo control group (a group that receives an inert treatment).

What are two methods used to check the construct validity of the independent variable (IV)?

1. Using a manipulation check to directly measure whether the IV manipulation worked as intended. 2. Conducting a pilot study to test the effectiveness and clarity of the manipulation before the main experiment.

What are two ways to check or support the construct validity of the dependent variable (DV)?

1. Referencing previous research to see how the construct has been successfully measured before. 2. Assessing face validity, which is whether the measure appears to assess the intended construct on the surface.

A result is typically considered statistically significant in psychology if the p-value is _____.

less than 0.05 (p < 0.05)

If a p-value is _____, the result is typically considered not statistically significant.

greater than 0.05 (p > 0.05)

What is Cohen's d?

Cohen's d is a standardized measure of effect size that indicates the magnitude of the difference between two group means, expressed in standard deviation units.

According to Cohen's conventions, a d value of 0.20 represents a _____ effect, 0.50 represents a _____ effect, and 0.80 represents a _____ effect.

small/weak, medium/moderate, large/strong
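
The definition above translates directly into arithmetic: d is the difference between the two group means divided by the pooled standard deviation. A minimal Python sketch using made-up numbers (the standard two-group pooled-SD formula; the data are not from the lesson):

```python
from statistics import mean, variance

def cohens_d(group1, group2):
    """Cohen's d: standardized difference between two group means,
    expressed in pooled-standard-deviation units."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    # Pool the sample variances, weighting each by its degrees of freedom.
    pooled_var = ((n1 - 1) * variance(group1) +
                  (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (m1 - m2) / pooled_var ** 0.5

# Two hypothetical groups whose means differ by exactly one pooled SD:
print(round(cohens_d([2, 4, 6], [0, 2, 4]), 2))  # → 1.0
```

Here both groups have a sample variance of 4 (pooled SD = 2) and the means differ by 2, so d = 1.0, which Cohen's conventions would label a large effect.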

What is a one-way experimental design?

An experimental design that involves only one independent variable (IV).

What defines a factorial design in experimental research?

A factorial design is an experimental design that includes two or more independent variables (also called factors).

What is an interaction effect in a factorial design?

An interaction effect occurs when the effect or influence of one independent variable on the dependent variable differs depending on the level of another independent variable.

In a factorial design, what is a main effect?

A main effect is the overall effect of one independent variable on the dependent variable, averaging across all levels of the other independent variable(s). It asks if there is an overall difference associated with that IV.

How can you mathematically identify or test for an interaction effect in a factorial design?

By looking for a "difference in differences." This involves calculating the effect of one IV at each level of the other IV and then comparing these effects (by subtracting). If the differences are significantly different, an interaction is likely present.
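
The "difference in differences" can be checked with plain arithmetic on hypothetical 2 × 2 cell means (all numbers below are invented for illustration):

```python
# Hypothetical cell means for a 2x2 factorial: IV1 (low/high) x IV2 (control/treatment)
means = {
    ("low", "control"): 10, ("low", "treatment"): 12,
    ("high", "control"): 10, ("high", "treatment"): 20,
}

# Effect of IV2 at each level of IV1:
effect_at_low = means[("low", "treatment")] - means[("low", "control")]     # 12 - 10 = 2
effect_at_high = means[("high", "treatment")] - means[("high", "control")]  # 20 - 10 = 10

# "Difference in differences": a nonzero value suggests an interaction.
interaction = effect_at_high - effect_at_low
print(interaction)  # → 8
```

Because the treatment effect is 2 points at the low level of IV1 but 10 points at the high level, the nonzero difference in differences (8) is the signature of an interaction; whether it is statistically significant would be tested with an ANOVA.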

What is a quasi-experiment?

A research design that resembles a true experiment but lacks full experimental control, typically because participants cannot be randomly assigned to conditions and/or the independent variable is not directly manipulated by the researcher (it is often a naturally occurring grouping variable).

Describe a non-equivalent control group design.

A quasi-experimental design involving at least two groups (e.g., a treatment group and a comparison group) that are not randomly assigned. Participants are measured only once, after one group has received the quasi-independent variable "treatment."

What is a non-equivalent control group pretest/posttest design?

A quasi-experimental design where participants in at least two non-randomly assigned groups are measured on the dependent variable both before and after one group is exposed to the quasi-independent variable.

What is an interrupted time-series design?

A quasi-experimental design where a single group of participants is measured repeatedly on a dependent variable before, possibly during, and after an "interruption" or event (the quasi-independent variable).

Describe a nonequivalent control group interrupted time-series design.

A quasi-experimental design that combines the interrupted time-series approach with a non-equivalent control group. It involves making repeated measurements on at least two non-randomly assigned groups before, during, and after an event or treatment occurs for one group but not the other.

Selection effects are primarily a threat in _____ designs. One way to assess (though not eliminate) this threat in such designs is to use a _____ measure.

between-groups (or independent-groups), pre-test/post-test

What type of study design is a key solution for preventing observer bias?

A blind or, ideally, double-blind study.

How are placebo effects typically controlled for in experimental research?

By including a placebo control group.

Briefly define a design confound.

A design confound occurs when a potential third variable varies systematically along with the independent variable, providing an alternative explanation for the results.

List three key differences between quasi-experiments and true experiments.

Quasi-experiments typically lack random assignment, participants might be selected based on pre-existing values of the quasi-independent variable, and they generally have lower internal validity compared to true experiments.

How do quasi-experiments differ from correlational studies?

Quasi-experiments typically involve selecting participants based on specific categories or a range of values for the quasi-independent variable and exert more control over the study setting compared to correlational studies, which usually measure existing variables without manipulation or group selection.

List three potential threats to internal validity that can arise when testing participants multiple times (e.g., pre-test/post-test). What is a general solution to help rule these out?

Potential threats include: 1. Maturation (natural changes in participants over time), 2. Testing/instrumentation effects (changes due to repeated measurement or the measures themselves), and 3. Regression to the mean (extreme scores moving closer to the average on subsequent testing). A general solution is to include a comparison group that does not receive the intervention/treatment.

What is a direct replication in research?

A replication study that attempts to reproduce an original study as closely as possible, using the same conceptual variables and the same operational definitions (procedures, measures).

Define conceptual replication.

A replication study that investigates the same research question (same conceptual variables) as an original study but uses different procedures or operational definitions for the variables.

What is a replication-plus-extension study?

A study where researchers replicate an original finding (using either direct or conceptual replication methods) but also add new variables or conditions to test additional questions or explore boundary conditions.

What is a meta-analysis?

A statistical technique that mathematically averages the results (typically effect sizes) from multiple studies that have investigated the same research question, in order to provide an overall estimate of the effect and understand the consistency of findings.
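
As a toy illustration of that averaging step, a weighted mean of per-study effect sizes can be computed in a few lines. Real meta-analyses typically weight each study by the inverse of its sampling variance; here the weights are simply sample sizes, and every number is invented:

```python
def meta_average(effect_sizes, weights=None):
    """Minimal meta-analytic summary: an (optionally weighted) average
    of per-study effect sizes. Unweighted if no weights are given."""
    if weights is None:
        weights = [1] * len(effect_sizes)
    return sum(d * w for d, w in zip(effect_sizes, weights)) / sum(weights)

# Three hypothetical studies (Cohen's d values), weighted by sample size:
ds = [0.30, 0.50, 0.10]
ns = [20, 100, 30]
print(round(meta_average(ds, ns), 3))  # → 0.393
```

The weighted summary (0.393) sits closer to the largest study's d of 0.50 than the plain average would, which is exactly why weighting matters: bigger studies give more precise estimates and so count for more.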

Describe the "generalization mode" of research. What types of claims are often associated with it, and what type of validity is prioritized?

"Generalization mode" refers to research where the primary goal is to make a claim about a larger population based on the sample studied. This often involves frequency claims, requires findings relevant to real-world contexts, and prioritizes external validity.

Describe the "theory-testing mode" of research. What types of claims are often tested, and what type of validity is typically prioritized?

"Theory-testing mode" refers to research where the primary goal is to rigorously test theoretical relationships, often involving association or causal claims, by isolating variables. This mode prioritizes internal validity, sometimes using controlled, artificial situations, with external validity being a secondary concern addressed later.

What type of claim describes the rate or degree of a single variable, often expressed as a percentage?

Frequency Claim

What type of claim suggests that one variable correlates or is related to another?

Association Claim

What type of claim asserts that one variable is responsible for causing a change in another?

Causal Claim

What technique ensures that each participant has an equal chance of being assigned to any condition within an experiment?

Random Assignment

In establishing causality, what criterion refers to the requirement that the proposed causal variable must be related to the proposed outcome variable?

Covariance

What criterion for causality establishes that the proposed causal variable must occur before the proposed outcome variable in time?

Temporal Precedence

What type of validity refers to the degree to which an experiment can confidently conclude that the independent variable, and not some other factor, caused the change in the dependent variable?

Internal Validity

According to the rules for establishing causality, how is covariance typically established in an experiment?

It is established by demonstrating a difference between the group means of the different levels of the independent variable. This requires having a comparison or control group.

According to the rules for establishing causality, how is temporal precedence typically established in an experiment?

The independent variable (IV) is manipulated before the dependent variable (DV) is measured.

According to the rules for establishing causality, how is internal validity typically established or strengthened in an experiment?

By using control variables to keep potential confounds constant and by using random assignment to distribute participant characteristics evenly across conditions.

_____ variance occurs when an extraneous variable fluctuates predictably along with the levels of the independent variable, potentially creating a confound.

Systematic

_____ variance occurs when an extraneous variable fluctuates randomly across all levels of the independent variable, adding noise but not typically creating a confound.

Unsystematic

What is the term for an extraneous variable that varies systematically with the independent variable and provides an alternative explanation for the results, thus threatening internal validity?

Confound Variable (or Confound)

What specific type of confound occurs when the experiment's design or procedure inadvertently allows another variable to vary systematically along with the independent variable?

Design Confound

What specific type of confound occurs when the characteristics of the participants in different groups vary systematically with the independent variable?

Selection Effect

What is the primary method for correcting or minimizing design confounds?

Using control variables; ensuring that the only systematic difference between conditions is the level of the independent variable.

What are the primary methods for correcting or minimizing selection effects?

Random assignment or using matched groups.

In what general type of experimental design is each participant assigned to only one level of the independent variable?

Independent-Groups Design (or Between-Groups Design)

In what general type of experimental design does each participant experience all levels of the independent variable?

Within-Groups Design (or Within-Subjects Design)

What specific type of within-groups design involves participants being exposed to all levels of the independent variable at roughly the same time?

Concurrent Measures Design

What specific type of within-groups design involves participants being measured on a dependent variable more than once, after exposure to each level of the independent variable presented sequentially?

Repeated Measures Design

What is the term for the potential confound in within-groups designs where exposure to one condition influences responses to subsequent conditions?

Order Effects

What procedural technique is used specifically to correct for order effects in within-groups designs?

Counterbalancing

What threat to internal validity occurs when participants systematically drop out of a study over time, especially common in pre-test/post-test designs?

Attrition (or Mortality)

How can researchers assess and potentially mitigate the impact of attrition?

Compare the pretest scores (or other baseline characteristics) of participants who dropped out with those who completed the study. Researchers might also choose to exclude data from any participant who did not complete all parts of the study.

What threat to internal validity describes a change in participant behavior or scores simply due to having taken a test or measurement more than once (e.g., practice or fatigue effects)?

Testing Effects

What threat to internal validity occurs due to changes in the measuring instrument or changes in the observers/coders over time?

Instrumentation Threat (or Instrument Decay)

What are potential corrections or considerations for addressing testing effects and instrumentation threats?

For testing effects: use a post-test only design if possible, use alternative forms of the test, or counterbalance test versions. For instrumentation: ensure consistent calibration and use of instruments, and retrain observers/coders.

What occurs when participants discern the study's purpose or hypothesis and consciously or unconsciously alter their behavior to align with perceived expectations?

Demand Characteristics

What type of bias happens when a researcher's own expectations or hypotheses unintentionally influence how they observe, interpret, or record participant behavior or data?

Observer Bias

What is a primary research design strategy used to correct for both observer bias and demand characteristics?

Conducting a double-blind study.

What is the term for a phenomenon where participants experience a change or improvement simply because they believe they are receiving an effective treatment, even if the treatment is inert?

Placebo Effect

What research design features are used to control for placebo effects?

Using a double-blind study design and including a placebo comparison group that receives an inert treatment disguised as the real treatment.

What are two methods used to evaluate the construct validity of the independent variable manipulation in an experiment?

Manipulation checks and pilot studies.

What are two considerations used to evaluate the construct validity of the dependent variable measure in an experiment?

Consulting previous research (how the variable has been successfully measured before) and assessing face validity (whether the measure appears, on the surface, to assess the intended construct).

In null hypothesis significance testing (NHST), what p-value threshold typically indicates that an observed result is unlikely due to chance alone, leading to the rejection of the null hypothesis?

p < 0.05

In null hypothesis significance testing (NHST), what p-value range typically indicates that an observed result is reasonably likely to have occurred by chance alone under the null hypothesis, leading to a failure to reject the null hypothesis?

p > 0.05

What statistic, often symbolized as d, provides a standardized measure of the difference between two group means, expressed in standard deviation units, indicating the magnitude of an effect?

Cohen's d

Match Cohen's d values to their conventional effect size descriptions:

0.20 = Small/Weak; 0.50 = Medium/Moderate; 0.80 = Large/Strong

What type of experimental design involves manipulating only one independent variable?

One-Way Design

Experimental designs that involve two or more independent variables (also called 'factors') are known as _____ designs.

Factorial

In a factorial design, what type of effect occurs when the effect of one independent variable on the dependent variable depends on the level of another independent variable?

Interaction Effect

In a factorial design, what is the term for the overall effect of one independent variable on the dependent variable, ignoring or averaging across the levels of the other independent variable(s)?

Main Effect

An interaction effect in a factorial design essentially asks if there is a 'difference in differences' between the effects of one IV at different levels of another IV.

True

What type of research design involves an independent variable and a dependent variable but lacks full experimental control, most notably the ability to use random assignment?

Quasi-Experiment

What specific type of quasi-experimental design compares two or more pre-existing groups after some event or condition has occurred, using only a post-test measure?

Non-Equivalent Control Group Design

What quasi-experimental design attempts to address some limitations of the basic non-equivalent control group design by measuring the dependent variable in two or more pre-existing groups both before and after an event or intervention occurs?

Non-Equivalent Control Group Pretest/Posttest Design

What quasi-experimental design involves taking multiple measurements on a single group before and after some intervening event or 'interruption' occurs?

Interrupted Time-Series Design

What complex quasi-experimental design combines features of the interrupted time-series and non-equivalent control group designs by taking multiple pre- and post-intervention measures on two or more non-equivalent groups?

Nonequivalent Control Group Interrupted Time-Series Design

Selection effects, where participants in different groups systematically differ before the study begins, are primarily a concern for - designs.

<p>independent-groups (or between-groups)</p> Signup and view all the answers

In true experiments, _____ _____ is the primary method to control for selection effects. In quasi-experiments where this isn't possible, using a / design can help assess initial group differences.

<p>random assignment, pretest/posttest</p> Signup and view all the answers

What research design feature is a key solution to minimize observer bias?

<p>Using a Blind or Double-blind study design.</p> Signup and view all the answers

What component is essential in an experimental design to control for placebo effects?

<p>Including a Placebo-control group.</p> Signup and view all the answers

If a potential third variable systematically varies along with the independent variable due to the way the study was set up (e.g., one condition is tested in the morning, the other in the evening), what type of confound is this?

<p>Design Confound</p> Signup and view all the answers

List two key differences between a quasi-experiment and a true experiment.

<ol> <li>Quasi-experiments lack random assignment. 2. In quasi-experiments, the independent variable is often measured or selected, not manipulated by the researcher.</li> </ol> Signup and view all the answers

How does a quasi-experiment typically differ from a correlational study?

<p>Quasi-experiments usually involve comparing selected groups based on IV levels and exert more control over study procedures, whereas correlational studies typically measure variables as they naturally exist without grouping or manipulation.</p> Signup and view all the answers

Identify three potential threats to internal validity that are particularly relevant when participants are measured multiple times (e.g., in pretest/posttest or within-subjects designs) and no comparison group is used.

<p>Maturation (changes due to time), Testing effects (changes due to repeated measurement), Instrumentation (changes in the measure itself), and Regression to the mean (extreme scores becoming less extreme).</p> Signup and view all the answers

What type of replication involves conducting a study using the exact same procedures and operationalizations as an original study?

<p>Direct Replication</p> Signup and view all the answers

What type of replication involves testing the same research question (same conceptual variables) as an original study but using different procedures or operationalizations?

<p>Conceptual Replication</p> Signup and view all the answers

What type of study repeats aspects of an original study but also adds new variables or conditions to test additional questions?

<p>Replication-Plus-Extension</p> Signup and view all the answers

What is the term for a statistical technique that mathematically averages the effect sizes from multiple studies investigating the same research question?

<p>Meta-analysis</p>

Which 'mode' of research primarily aims to make claims about a specific population, often focuses on frequency claims, and prioritizes external validity?

<p>Generalization Mode</p>

Which 'mode' of research primarily aims to test theoretical predictions rigorously, often focuses on association and causal claims, prioritizes internal validity, and may use controlled, artificial settings?

<p>Theory-Testing Mode</p>

What type of research claim describes the rate or degree of a single variable, often expressed as a percentage?

<p>Frequency Claim</p>

What type of research claim suggests that one variable correlates with another?

<p>Association Claim</p>

What type of research claim argues that one variable causes a change in another?

<p>Causal Claim</p>

What is the process called where each participant has an equal chance of ending up in each condition of an experiment?

<p>Random Assignment</p>

In establishing causation, what criterion asks if variable A relates to variable B?

<p>Covariance</p>

What criterion for establishing causation ensures that the causal variable (A) occurred before the outcome variable (B)?

<p>Temporal Precedence</p>

What term describes the extent to which an experiment ensures that variable A caused variable B, with no other plausible alternative explanations?

<p>Internal Validity</p>

How is the covariance rule typically established in an experiment?

<p>It is established by demonstrating a difference between group means, which requires having a comparison or control group.</p>

How is the temporal precedence rule typically established in an experiment?

<p>The Independent Variable (IV) is manipulated <em>before</em> the Dependent Variable (DV) is measured.</p>

How is the internal validity rule typically upheld in an experimental study?

<p>By using control variables in the study and employing random assignment.</p>

What type of variance occurs when another variable fluctuates systematically with the levels of the IV, is not controlled for, and potentially contributes to confounds?

<p>Systematic Variance</p>

What type of variance occurs when another variable fluctuates randomly across all levels of the IV and does not typically lead to confounds?

<p>Unsystematic Variance</p>

What is the term for an extraneous variable that varies systematically with the independent variable and provides an alternative explanation for the results?

<p>Confound Variable</p>

What occurs when another variable varies systematically with an Independent Variable (IV), threatening internal validity?

<p>Confound Variable</p>

What type of confound occurs when the experimental design itself allows another variable to vary systematically with the IV?

<p>Design Confound</p>

What type of confound occurs when participant characteristics vary systematically with the levels of the IV?

<p>Selection Effect</p>

How can researchers correct for design confounds?

<p>By using Control Variables and ensuring there are no differences between conditions other than the intended levels of the IV.</p>

How can researchers correct for selection effects?

<p>By using random assignment or matched groups.</p>

What type of experimental design involves assigning each participant to only one level of the IV?

<p>Independent-Groups Designs (or Between-Subjects Designs)</p>

What type of experimental design involves each participant completing all levels of the IV?

<p>Within-Groups Designs (or Within-Subjects Designs)</p>

In a within-groups design, what is it called when participants experience all levels of the IV at roughly the same time?

<p>Concurrent Measures</p>

In a within-groups design, what type of measure involves participants being measured on the DV after exposure to each level of the IV, often with the levels presented sequentially?

<p>Repeated Measures</p>

What is the potential issue in within-groups designs where exposure to one condition changes how participants respond to a later condition?

<p>Order Effects</p>

How can researchers correct for order effects in within-groups designs?

<p>Counterbalancing</p>

What is the term for participants dropping out of a study before it is completed, especially problematic in designs with multiple measurement points?

<p>Attrition (or mortality)</p>

How can researchers address the potential impact of attrition?

<p>Compare pretest scores of those who dropped out with those who stayed; consider excluding participants with incomplete data.</p>

What threat to internal validity involves a change in participants' scores over time simply because they have taken a test more than once?

<p>Testing Effects</p>

What threat to internal validity occurs when the measuring instrument changes over time?

<p>Instrumentation Effects (or Instrument Decay)</p>

How can researchers address or correct for potential instrumentation effects?

<p>Consider using a post-test only design (if appropriate) or ensure instruments are calibrated consistently and observers use standardized coding.</p>

What occurs when participants guess the study's hypothesis and change their behavior to align with (or sometimes against) the perceived expectations?

<p>Demand Characteristics</p>

What type of bias occurs when a researcher's expectations influence their interpretation of the results or their interactions with participants?

<p>Observer Bias (or Experimenter Bias)</p>

How can researchers primarily correct for observer bias?

<p>Conduct a double-blind study.</p>

What phenomenon describes a change in participants' condition due simply to their belief that they are receiving an effective treatment?

<p>Placebo Effects</p>

How can researchers control for or correct for placebo effects?

<p>Use a double-blind study design and include a placebo comparison group.</p>

What are two ways to check the construct validity of the independent variable (IV) manipulation in an experiment?

<ol><li>Manipulation check (directly measuring if the IV manipulation worked as intended).</li><li>Pilot Study (a small trial run to test procedures and manipulations).</li></ol>

What are two ways to check the construct validity of the dependent variable (DV) measure in an experiment?

<ol><li>Referencing Previous Research (using measures established in prior studies).</li><li>Assessing Face Validity (evaluating if the measure appears to assess the construct of interest).</li></ol>

In inferential statistics, what p-value threshold typically indicates a result is statistically significant?

<p>p &lt; 0.05</p>

In inferential statistics, what p-value range typically indicates a result is not statistically significant?

<p>p &gt; 0.05</p>

What is Cohen's d?

<p>A standardized measure of effect size indicating the magnitude of the difference between two group means, expressed in standard deviation units.</p>

According to Cohen's conventions for effect size (d), what value represents a medium or moderate effect?

<p>0.50</p>

What type of experimental design involves only one independent variable?

<p>One-Way Designs</p>

What type of experimental design involves two or more independent variables (factors)?

<p>Factorial Designs</p>

In a factorial design, what is it called when the influence of one independent variable on the dependent variable changes depending on the level of another independent variable?

<p>Interaction Effect</p>

In a factorial design, what is the overall effect of one independent variable on the dependent variable, averaging across the levels of the other independent variable(s)?

<p>Main Effect</p>

How do researchers typically assess whether there is an interaction effect in a factorial design?

<p>By looking for a 'difference in differences'; comparing the simple effects of one IV at different levels of the other IV.</p>

What type of study resembles an experiment but lacks full experimental control, particularly because participants cannot be randomly assigned to conditions?

<p>Quasi-Experiments</p>

What type of quasi-experimental design involves comparing two or more non-randomly assigned groups only after the 'independent variable' has occurred (i.e., only a post-test)?

<p>Non-Equivalent Group Design</p>

What type of quasi-experimental design involves comparing two or more non-randomly assigned groups both before and after the 'independent variable' occurs?

<p>Non-equivalent Control Group Pretest/Posttest Design</p>

What type of quasi-experimental design involves measuring a single group repeatedly before, during, and after some event or 'intervention' (the 'IV')?

<p>Interrupted Time Series Design</p>

What type of quasi-experimental design combines features of the interrupted time series and non-equivalent control group designs, following two or more groups over time before, during, and after an event/intervention that affects only one group (or affects groups differently)?

<p>Nonequivalent Control Group Interrupted Time-Series Design</p>

Selection Effects are only a problem for _____ designs. The solution is to use a _____ design.

<p>between-group, pre-test/post-test</p>

What is a primary solution to mitigate observer bias?

<p>Using a Blind or Double-blind study design.</p>

What is a primary solution to control for placebo effects?

<p>Including a Placebo-control group (often within a double-blind design).</p>

What potential threat to internal validity occurs when a third variable systematically varies along with the independent variable?

<p>Design Confounds</p>

What are key differences between a true experiment and a quasi-experiment?

<p>Quasi-experiments lack random assignment; subjects are often selected based on existing characteristics or group memberships (IV values are not fully manipulated by the researcher). Consequently, quasi-experiments typically have lower internal validity than true experiments.</p>

How do quasi-experiments differ from correlational studies?

<p>Quasi-experiments typically involve comparing groups based on some 'independent' variable (even if not randomly assigned or fully manipulated) and often select a specific range of IV values, offering more control than typical correlational studies which measure variables as they naturally exist.</p>

What are potential problems when testing participants multiple times (e.g., pre-test and post-test), and what is a common solution?

<p>Problems include maturation (natural changes in participants over time), testing effects (changes due to repeated testing), instrumentation effects (changes in the measure), and regression to the mean (extreme scores becoming less extreme on retesting). A common solution is to include a comparison group that does not receive the intervention but undergoes the same testing schedule.</p>

What type of replication involves conducting a study using the same conceptual variables but the exact same operationalizations as the original study?

<p>Direct Replication</p>

What type of replication involves conducting a study using the same conceptual variables but different operationalizations from the original study?

<p>Conceptual Replication</p>

What type of replication involves repeating the original study's methods (same conceptual variables) but also adding new variables or conditions?

<p>Replication-Plus-Extension</p>

What research technique involves mathematically averaging the effect sizes from multiple studies on the same topic to determine the overall conclusion supported by the evidence?

<p>Meta-analysis</p>

Which 'mode' of research prioritizes generalizing findings to a broader population, often focusing on frequency claims where real-world applicability is key and external validity is essential?

<p>Generalization Mode</p>

Which 'mode' of research prioritizes rigorously testing a theory and isolating variables, often focusing on association and causal claims where internal validity is paramount, even if it requires somewhat artificial situations?

<p>Theory-Testing Mode</p>

Flashcards

Frequency Claim

Describes rate or degree of a single variable using percentages.

Association Claim

States that one variable correlates with another.

Causal Claim

States that one variable causes a change in another.

Random Assignment

Each participant has an equal chance of being assigned to each condition.

Covariance

Determines if variable A relates to variable B.

Temporal Precedence

Establishes that variable A precedes variable B.

Internal Validity

Indicates that A caused B in the experiment with no other variables affecting the outcome.

Covariance Rule

Established through the difference between group means, requiring a comparison or control group.

Temporal Precedence Rule

The independent variable is manipulated before the dependent variable is measured.

Internal Validity Rule

Control variables are present, and random assignment is used.

Systematic Variance

Variable that fluctuates with levels of the IV and impacts results.

Unsystematic Variance

Another variable fluctuates across all levels of the IV.

Confound Variable

A variable (not being studied) that can affect the outcome.

Confound Variable

When another variable varies systematically with an IV.

Design Confound

The experiment is designed such that another variable varies systematically with the IV.

Selection Effect

Participant characteristics vary systematically with the IV.

Control variables

There should be no differences between conditions, other than levels of the IV.

Random Assignment or Matched Groups

Participants should all have an equal chance of assignment to each condition.

Independent-Groups Designs

Each participant is assigned to one level of the IV.

Within-Groups Designs

Each participant completes all levels of the IV.

Concurrent Measures

Experiencing both IV levels at the same time.

Repeated Measures

Participants are measured on the DV after exposure to each level of the IV, with the levels presented sequentially.

Order Effects

Exposure to one condition changes responses to a later condition.

Counterbalancing

Participants complete levels of IV in different sequences, with random assignment of order.

Attrition

Participants drop out of the study.

Correction to Attrition

Compare pretest scores of those who dropped out with those who stayed and exclude anyone with incomplete data.

Testing

A change due to taking a test more than once.

Instrumentation

Changes due to instruments being used changing.

Correction to Instrumentation

Consider using a post-test only design. Ensure instruments are set up correctly.

Demand Characteristics

Participants guess the study's purpose and change their behavior accordingly.

Observer Bias

Researcher's expectations influence the interpretation of results.

Correction of Observer Bias

Conduct a double-blind study.

Placebo Effects

Change due to believing they are receiving treatment.

Correction to Placebo Effects

Double-blind study. Include placebo comparison group.

Checking Construct Validity of the IV

Manipulation check. Pilot Study to check variables.

Checking Construct Validity of the DV

Previous Research, Face Validity.

Statistically Significant

p < 0.05

Not Statistically Significant

p > 0.05

Cohen's d

A standardized mean difference used as a measure of effect size.

Cohen's d Effect Sizes

0.20 = small/weak; 0.50 = medium/moderate; 0.80 = large/strong.

One-Way Designs

Only one IV; answers simple questions.

Factorial Designs

Two or more IVs (AKA "factors"); answers more complex questions.

Interaction Effect

The influence of one IV on the DV changes depending on the level of another IV.

Main Effect

Is the effect of one IV on the DV, averaging across levels of the other IV.

Interactions

Occurs when the effect of an IV on the DV depends on the levels of the other IV; in other words, there is a "difference in differences."

Quasi Experiments

Has an "independent" and a dependent variable but lacks full experimental control; no random assignment is possible.

Non-Equivalent Group Design

Tested only after the IV and follows 2+ groups. Prone to selection effects; for example, people in Greek life may already be more outgoing than those not in Greek life.

Non-equivalent Control Group Pretest/Posttest Design

Tested before & after IV follows 2+ groups.

Interrupted Time Series Design

Tested before, during, and after "IV" and follows 1 group.

Nonequivalent Control Group Interrupted Time-Series Design

Tested before, during, & after "IV" and follows 2+ groups.

Between-group designs Pre-test/Post-test

Selection effects are only a problem for between-group designs; a pre-test/post-test design helps address them.

Blind/ Double-blind study

Observer Bias Solution

Placebo- control group

Placebo Solution

Design Confounds

A possible third variable systematically varies with the IV.

Quasi Experiment

  • No random assignment
  • Subjects selected based on IV values
  • Less internal validity

Quasi-Experiment vs. Correlational Study

  • Only a selected range of IV values is included
  • More control over procedures

Problem When Testing Multiple Times

  • Maturation of participants
  • Testing and instrumentation effects
  • Regression to the mean (only happens with extreme scores)
  Solution: include a comparison group

Direct Replication

Study involves same conceptual variables, same operationalizations

Conceptual Replication

Study involves same conceptual variables, different operationalizations

Replication-Plus-Extension

Study involves same conceptual variables, plus new variables

Meta-analysis

A way of mathematically averaging effect sizes of studies to see what conclusion the weight of the evidence supports

Generalization Mode

  • Frequency claims
  • Goal= make a claim about a population
  • Real-world applicability matters; external validity is essential

Theory-Testing Mode

  • Association and causal claims
  • Goal= test a theory rigorously, isolate variables
  • Prioritize internal validity
  • Artificial situations may be required
  • Real-world application comes later; external validity is not the priority

Study Notes

  • Psychology research methods involve various claims, designs, and considerations for validity and potential issues.

Types of Claims

  • Frequency Claim describes the rate or degree of a single variable, expressed as a percentage.
  • Association Claim suggests one variable correlates with another, for example, more sleep is related to a better mood.
  • Causal Claim indicates that one variable causes a change in another, for example, eating chocolate increases life satisfaction.

Key Experimental Concepts

  • Random Assignment ensures each participant has an equal chance of being placed in each condition.
  • Covariance assesses whether variable A is related to variable B.
  • Temporal Precedence confirms that variable A precedes variable B, suggesting a causal relationship.
  • Internal Validity ensures that variable A caused variable B in the experiment, without other variables affecting the outcome.
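As a hypothetical illustration (the participant numbers and condition names below are invented, not part of the lesson), the logic of random assignment can be sketched in Python:

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle the participant pool, then deal it out round-robin so every
    participant has an equal chance of landing in each condition."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)  # every ordering of the pool is equally likely
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = random_assignment(range(20), ["treatment", "control"], seed=1)
print(sorted(len(g) for g in groups.values()))  # → [10, 10]
```

Because assignment depends only on the shuffle, participant characteristics are spread across conditions by chance, which is what corrects selection effects.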

Rules for Establishing Causation

  • Covariance is established through the difference between group means, requiring a comparison or control group.
  • Temporal Precedence requires the independent variable (IV) to be manipulated before the dependent variable (DV) is measured.
  • Internal Validity requires control variables and random assignment in the study.

Types of Variance

  • Systematic Variance fluctuates with the levels of the IV and is not controlled for, contributing to confounding variables.
  • Unsystematic Variance fluctuates across all levels of the IV randomly and does not lead to confounds.

Confounding Variables

  • A confounding variable is an extraneous factor that affects the outcome; there is always a risk of missing one.
  • A confounding variable varies systematically with an IV, threatening internal validity.
  • Design Confound occurs when an experiment is designed such that another variable varies systematically with the IV.
  • Selection Effect arises when participant characteristics vary systematically with the IV.

Correcting Confounding Variables

  • Control Variables aim to eliminate differences between conditions other than the levels of the IV.
  • Random assignment or matched groups ensures participants have an equal chance of being assigned to each condition, correcting selection effects.

Experimental Designs

  • Independent-Groups Designs assign each participant to only one level of the IV, using either post-test only or pre-test and post-test.
  • Within-Groups Designs involve each participant completing all levels of the IV, using concurrent or repeated measures.
  • Concurrent Measures involve experiencing both IV levels at the same time.
  • Repeated Measures involve measuring participants on the DV after exposure to each level of the IV, with the levels presented sequentially.

Order Effects and Counterbalancing

  • Order Effects occur when exposure to one condition changes responses to a later condition and this is a type of design confound.
  • Counterbalancing corrects order effects by having participants complete levels of the IV in different sequences, with the order randomly assigned.
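A minimal sketch of full counterbalancing (the condition labels are invented for illustration): every possible presentation order of the IV levels is generated, and participants would then be randomly assigned to one of those orders.

```python
from itertools import permutations

def full_counterbalance(levels):
    """Full counterbalancing: list every possible presentation order of the
    IV levels; participants are then randomly assigned one order each."""
    return list(permutations(levels))

orders = full_counterbalance(["A", "B", "C"])
print(len(orders))  # → 6 (3! orderings of three conditions)
```

With every order represented, any order effect is spread evenly across conditions instead of piling up in one of them.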

Threats to Validity and Corrections

  • Attrition happens when participants drop out and this can be addressed by comparing pretest scores of those who dropped out with those who stayed, and excluding anyone with incomplete data.
  • Testing refers to changes due to taking a test more than once, where doing something multiple times leads to improved performance or decreased interest.
  • Instrumentation involves changes due to instruments being used changing, and this can be addressed by using a post-test only design and ensuring instruments are set up correctly.
  • Demand Characteristics occur when participants guess the study's purpose and change their behavior to meet expectations, so use a blind study.
  • Observer Bias occurs when a researcher's expectations influence the interpretation of results, so conduct a double-blind study.
  • Placebo Effects involve change due to believing they are receiving treatment, and this can be addressed using a double-blind study and including a placebo comparison group.

Validity Checks

  • Manipulation checks assess the IV.
  • Pilot Studies are used to check variables.
  • Checking Construct Validity of the Dependent Variable can be done through previous research and face validity.

Statistical Significance

  • Statistical Significance is indicated by p < 0.05.
  • Not Statistically Significant is indicated by p > 0.05.

Effect Size

  • Cohen's d is a standardized mean difference used to measure effect size.
  • 0.20 represents a small, weak effect size.
  • 0.50 represents a medium, moderate effect size.
  • 0.80 represents a large, strong effect size.
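The standardized mean difference can be computed directly; this sketch uses a pooled standard deviation, and the mood scores are made up for illustration:

```python
import math
from statistics import mean

def cohens_d(group1, group2):
    """Cohen's d: (M1 - M2) divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variance, group 1
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)  # sample variance, group 2
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# hypothetical mood scores for two groups
d = cohens_d([5, 6, 7, 8, 9], [4, 5, 6, 7, 8])
print(round(d, 2))  # → 0.63 (between medium and large by Cohen's conventions)
```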

Experimental Designs with Multiple IVs

  • One-Way Designs involve only 1 IV and answer simple questions.
  • Factorial Designs involve two or more IVs (factors) and answer more complex questions.

Interaction and Main Effects

  • Interaction Effect occurs when the influence of one IV on the DV changes depending on the level of another IV.
  • Main Effect assesses the overall difference: the effect of one IV on the DV, averaging across levels of the other IV, and is found by comparing means.
  • Interactions assess whether there is a difference in differences; they occur when the effect of one IV on the DV depends on the levels of the other IV, and are found by comparing differences (subtracting simple effects).
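The "difference in differences" logic can be shown with hypothetical 2 × 2 cell means (the variables and numbers below are invented for illustration):

```python
# Hypothetical 2x2 factorial: IV1 = treatment (placebo/active), IV2 = time (am/pm)
cells = {
    ("placebo", "am"): 4.0, ("placebo", "pm"): 4.0,
    ("active", "am"): 6.0, ("active", "pm"): 9.0,
}

# Main effect of treatment: compare means averaged across time of day
placebo_mean = (cells[("placebo", "am")] + cells[("placebo", "pm")]) / 2  # 4.0
active_mean = (cells[("active", "am")] + cells[("active", "pm")]) / 2     # 7.5
main_effect = active_mean - placebo_mean

# Interaction: subtract the simple effects (the "difference in differences")
simple_am = cells[("active", "am")] - cells[("placebo", "am")]  # 2.0
simple_pm = cells[("active", "pm")] - cells[("placebo", "pm")]  # 5.0
interaction = simple_pm - simple_am

print(main_effect, interaction)  # → 3.5 3.0
```

A nonzero difference in differences (here 3.0) means the treatment's effect depends on time of day, i.e., an interaction.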

Quasi-Experiments

  • Quasi Experiments have an "independent" and a dependent variable but lack full experimental control.
  • There is no random assignment possible.
  • The IV is not manipulated by the experimenter.
  • Non-Equivalent Group Design tests 2+ groups only after the IV, only performs a post-test.
  • Non-equivalent Control Group Pretest/Posttest Design tests 2+ groups before & after the IV.
  • Interrupted Time Series Design tests one group before, during, and after the "IV".
  • Nonequivalent Control Group Interrupted Time-Series Design tests 2+ groups before, during, and after the "IV".

Addressing Threats in Quasi-Experiments

  • Selection Effects are only a problem for between-group designs.
  • Blind or double-blind studies address observer bias.
  • Placebo-control groups address placebo effects.
  • Design Confounds are when a possible 3rd variable systematically varies with IV.

Quasi-Experiment vs. Experiment

  • Quasi experiments have no random assignment.
  • Subjects are selected based on IV values.
  • Quasi experiments have less internal validity than experiments.

Quasi-Experiment vs. Correlation

  • Quasi experiments only select a range of values of IV.
  • Quasi Experiments have more control.

Problems When Testing Multiple Times

  • Maturation of participants, testing/instrumentation effects, and regression to the mean.
  • These can be addressed with a comparison group.

Replication Types

  • Direct Replication involves the same conceptual variables, same operationalizations.
  • Conceptual Replication involves the same conceptual variables, but different operationalizations.
  • Replication-Plus-Extension involves the same conceptual variables, plus new variables.

Meta-Analysis

  • Meta-analysis mathematically averages effect sizes of studies to determine the weight of evidence.
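A simplified sketch of the averaging step (real meta-analyses typically weight each study by inverse variance; here studies are weighted by sample size, and all effect sizes are hypothetical):

```python
def weighted_mean_effect(effect_sizes, weights):
    """Weighted average of study effect sizes (e.g., Cohen's d values)."""
    return sum(d * w for d, w in zip(effect_sizes, weights)) / sum(weights)

# three hypothetical studies of the same research question
ds = [0.20, 0.50, 0.80]   # per-study effect sizes
ns = [200, 50, 50]        # per-study sample sizes used as weights
print(round(weighted_mean_effect(ds, ns), 2))  # → 0.35
```

Note how the large low-effect study pulls the overall estimate well below the unweighted mean of 0.50.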

Research Modes

  • Generalization Mode is for frequency claims where the goal is to make a claim about a population, real-world matters, and external validity is essential.
  • Theory-Testing Mode is for association and causal claims where the goal is to test a theory rigorously, prioritize internal validity, and external validity is not the priority.
