Research Methods: Key Concepts


Questions and Answers

A researcher seeks out studies that support their hypothesis while overlooking contradictory findings. Which bias does this exemplify?

  • Selection bias
  • Omitted variable bias
  • Confirmation bias (correct)
  • Publication bias

Which research activity illustrates deductive reasoning?

  • Formulating a general theory based on specific experimental findings
  • Observing that all observed swans are white and concluding that all swans are white
  • Starting with a general theory about political behavior to predict individual voting patterns (correct)
  • Using survey data to identify emerging trends in consumer preferences

What is indicated when a research study favors publishing only positive results, neglecting null or negative findings?

  • The study has high external validity.
  • The study is likely influenced by bias against negative results. (correct)
  • The study avoids Type II errors.
  • The study effectively uses falsification.

In an experiment examining the effect of a new drug on reaction time, what is the independent variable?

Answer: The dosage of the new drug

In a study assessing the impact of a tutoring program on student grades, what represents the dependent variable?

Answer: The students' final grades

What key principle is applied when researchers attempt to disprove a hypothesis by testing its predictions?

Answer: Falsification

What characteristic defines a scientific theory according to the principle of falsifiability?

Answer: It must be testable and potentially disprovable.

What does internal validity in a research study indicate?

Answer: The degree to which a causal relationship between variables is established

What aspect of a study does external validity address?

Answer: The generalizability of the findings to other populations and settings

Which type of variable is 'temperature in Celsius'?

Answer: Interval

What is the primary assumption of a null hypothesis?

Answer: There is no effect or relationship between the variables under investigation.

A hypothesis that combines two distinct claims into one statement is known as what?

Answer: A double-barreled hypothesis

What is the key characteristic of teleological arguments?

Answer: They explain phenomena based on predetermined purposes or goals.

A study examining interactions between pairs of countries is using what as its unit of analysis?

Answer: Dyads

What is the definition of 'ecological fallacy' in research?

Answer: Assuming that what is true for a group is also true for an individual from that group

Which statistical method is used to compare changes over time between a treatment group and a control group?

Answer: Difference-in-differences

Oversimplifying complex issues, such as asserting that all wars are due to economic factors, exemplifies what?

Answer: Reductionism

What is the term for hidden variables that affect the results of a study?

Answer: Confounding factors

A researcher only studies successful revolutions to determine their causes. What type of problem is this?

Answer: Selection on the dependent variable

What is a key advantage of using a case study method in research?

Answer: It provides deep, detailed understanding of a specific case.

Which of the following is a weakness of the case study method?

Answer: Findings may not apply to other cases.

What is a 'Type I error' in statistical hypothesis testing?

Answer: Incorrectly rejecting a true null hypothesis

What is the definition of 'omitted variable bias' in statistical analysis?

Answer: Ignoring an important variable that affects both independent and dependent variables

In program evaluation, what distinguishes 'stochastic effects' from 'design effects'?

Answer: Stochastic effects are random variations, while design effects are errors introduced by study structure.

What is the primary challenge in evaluating full coverage programs compared to partial coverage programs?

Answer: Full coverage programs lack a natural control group.


Flashcards

Confirmation Bias

Seeking, interpreting, and remembering information that confirms pre-existing beliefs while ignoring contradictory evidence.

Deductive Reasoning

A logical process where a general premise leads to a specific conclusion.

Induction

Reasoning from specific cases to general principles.

Bias Against Negative Results

Tendency in research to favor publishing positive results over null or negative ones.

Independent Variable (IV)

The factor manipulated in a study.

Dependent Variable (DV)

The outcome being measured in a study.

Falsification

Testing hypotheses by trying to disprove them.

Falsifiability

A theory must be testable and capable of being proven false to be scientific.

Internal Validity

The degree to which a study establishes a causal relationship between variables without outside influence.

External Validity

How well study results apply to broader contexts beyond the test group.

Counterfactuals

Alternative scenarios to compare what could have happened.

Random Selection

Participants are chosen randomly (used in surveys for representativeness).

Purposive Sampling

Participants are chosen based on specific characteristics relevant to the study.

Randomization

Ensuring participants have an equal chance of being placed in any study condition to eliminate bias.

Nominal Variable

Categorical data without numerical meaning.

Ordinal Variable

Ranked data without equal intervals.

Interval Variable

Ordered data with equal intervals but no true zero.

Ratio Variable

Like interval, but with a true zero.

Null Hypothesis

The default assumption that there is no effect or relationship.

Double-Barrelled Hypothesis

A hypothesis containing two statements that should be separated.

Teleological Arguments

Explanations that assume things exist for a purpose rather than being a result of other factors.

Dyads as a Unit of Analysis

Studying relationships (interactions) between two actors.

Mismatch in Levels of Analysis

When the unit of study does not match the level of explanation.

Ecological Fallacy

Applying group-level findings to individuals.

Difference-in-Difference

A method to compare changes over time between treatment and control groups.

Study Notes

  • The test assesses understanding of research methods and the ability to interpret examples with multiple choice questions.

Understanding Key Concepts

  • Confirmation Bias: Seeking, interpreting, and recalling information that aligns with existing beliefs, while dismissing contradictory evidence.
  • Deductive Reasoning: A logical process moving from a general premise to a specific conclusion.
    • Example: "All humans are mortal. Socrates is human. Therefore, Socrates is mortal."
  • Induction: Reasoning from specific observations to general principles, opposite of deduction.
    • Example: "Every swan I’ve seen is white; therefore, all swans must be white."
  • Bias Against Negative Results: The inclination to publish positive research results over null or negative ones.
  • Independent Variable (IV): The factor manipulated in a study.
    • Example: A new teaching method.
  • Dependent Variable (DV): The outcome being measured.
    • Example: Student test scores.
  • Falsification: Testing hypotheses by attempting to disprove them.
  • Falsifiability/Unfalsifiability: A scientific theory must be testable and capable of being proven false.
    • Example: "All swans are white" is falsifiable; "Ghosts exist" is unfalsifiable.
  • Internal Validity: The extent to which a study establishes a cause-and-effect relationship between variables, free from outside influence.
  • External Validity (Generalizability): How well study results can be applied to broader contexts beyond the test group.
  • Counterfactuals: Alternative scenarios used to compare what might have happened.
    • Example: "Would the Cold War have ended the same way if the USSR had stronger leadership?"
  • Random Selection: Participants chosen randomly, often used in surveys for representativeness.
  • Purposive Sampling: Participants chosen based on specific characteristics relevant to the study.
  • Randomization: Random assignment of participants to treatment and control groups, so each participant has an equal chance of being placed in any study condition, reducing bias.
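The random-assignment idea above can be sketched in a few lines of Python. This is a minimal illustration; the `randomize` helper and the seed value are hypothetical, not from the source.

```python
import random

def randomize(participants, seed=None):
    """Randomly assign participants to two groups (hypothetical helper).

    Shuffling puts participants in a random order, so each one has an
    equal chance of landing in either condition; splitting the shuffled
    list in half yields treatment and control groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                 # random order removes assignment bias
    half = len(pool) // 2
    return pool[:half], pool[half:]   # (treatment, control)

# Usage: split 6 participants into two groups of 3.
treatment, control = randomize(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
print(len(treatment), len(control))   # 3 3
print(set(treatment) & set(control))  # set(): groups are disjoint
```

Fixing the seed makes the assignment reproducible, which is useful for replication; in a real study the seed would not be chosen to produce a particular split.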

Types of Variables

  • Nominal: Categorical data without numerical meaning.
    • Example: Gender, ethnicity.
  • Ordinal: Ranked data without equal intervals.
    • Example: Military ranks.
  • Interval: Ordered data with equal intervals but no true zero point.
    • Example: Temperature in Celsius.
  • Ratio: Similar to interval, but includes a true zero point.
    • Example: Income, height.
  • Null Hypotheses: The assumption of no effect or relationship as a default.
    • Example: "There is no difference in test scores between students who study alone vs in groups."
  • Double-Barrelled Hypothesis: A hypothesis containing two statements that should be separated.
    • Example: "Social media increases political engagement and political polarization."
  • Teleological/Functionalist Arguments: Explanations assuming things exist for a purpose rather than resulting from other factors.
    • Example: "Democracies exist because they create stability."
  • Dyads as a Unit of Analysis: Studying relationships (interactions) between two actors.
    • Example: Country pairs in international conflict studies.
  • Mismatch in Levels of Analysis: Occurs when the unit of study does not align with the level of explanation.
    • Example: Explaining global trends based on individual behavior.
  • Ecological Fallacy: Applying group-level findings to individuals.
    • Example: "Country A has a higher average income, so all people in Country A must be wealthy."
  • Difference-in-Difference: A method to compare changes over time between treatment and control groups.
  • Reductionism: Oversimplification of complex issues.
    • Example: "All wars are caused by economic problems."
  • Mill's Method of Difference: Comparing cases where only one factor differs to determine causation.
  • Overdetermination Problem: Multiple factors contribute to an outcome, making it harder to identify the main cause.
  • Degrees of Freedom: The number of independent values in a statistical calculation.
  • Confounding Factors: Hidden variables that affect results.
  • Control Variables: Factors held constant to isolate the effect of the independent variable.
  • Selection on the Dependent Variable: Analyzing cases where the outcome occurs.
    • Example: Studying only successful revolutions to determine their causes.
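Mill's Method of Difference, mentioned above, lends itself to a small illustration: compare two cases that share every recorded factor but one, and that factor is the candidate cause. The `mills_difference` helper and the country data below are hypothetical, invented for the example.

```python
def mills_difference(case_a, case_b):
    """Mill's Method of Difference (illustrative sketch): given two cases
    recorded over the same factors, return the factors whose values differ.
    If exactly one factor differs between a case where the outcome occurred
    and one where it did not, that factor is the candidate cause."""
    return [f for f in case_a if case_a[f] != case_b.get(f)]

# Two hypothetical countries, identical except for press freedom;
# one experienced a revolution and the other did not.
revolution    = {"poverty": True, "press_freedom": False, "urbanization": True}
no_revolution = {"poverty": True, "press_freedom": True,  "urbanization": True}

print(mills_difference(revolution, no_revolution))  # ['press_freedom']
```

In practice the method's weakness is exactly the overdetermination problem listed above: real cases rarely differ in only one factor.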

Strengths and Weaknesses of the Case Study Method

  • Strengths:
    • Depth and detail: provides a rich, in-depth understanding of a single case.
    • Good for generating hypotheses and developing new theories.
    • Exploration of unique events: useful for understanding exceptional cases.
      • Example: Cuban Missile Crisis.
  • Weaknesses:
    • Overdetermination: Too many possible causes for one outcome.
    • Limited generalizability: Findings may not apply to other cases.
    • Omitted variable bias: May miss key variables that influence results.
    • Difficult to measure impact of one factor because many events have multiple causes.

Examples

  • Using a case study approach to study the 1994 Rwandan Genocide may give insights into ethnic tensions, but it would not explain whether similar tensions always lead to genocide.

Strengths and Weaknesses of Large-N Statistical Approaches

  • Strengths:
    • Generalizability: Findings can be applied to a larger population.
    • Isolates effects of variables: Uses statistical controls to separate the impact of different variables.
    • Detects patterns: Detects trends that case studies might miss.
  • Weaknesses:
    • Lack of context: May miss nuances of individual cases.
    • Data limitations: Requires high-quality, comparable data across cases.
    • Cannot capture process: Shows correlation but struggles with identifying causal mechanisms.

Examples

  • A large-N study on democracy and economic growth can show that democracies tend to be wealthier, but it might miss why this happens in some cases and not others.
  • Type I Error (False Positive): Incorrectly rejecting a true null hypothesis.
  • Type II Error (False Negative): Failing to reject a false null hypothesis.
  • P-Value: The probability of obtaining the observed results by chance if the null hypothesis is true.
  • Omitted Variable Bias: Ignoring an important variable that affects both independent and dependent variables.
  • Use and Misuse of Historical Analogy: Comparing past events to current ones without considering key differences.
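The Type I/Type II distinction above is easy to mix up; it can be captured as a small decision table. The `classify_outcome` helper below is illustrative, not a standard API.

```python
def classify_outcome(null_is_true, rejected_null):
    """Classify a hypothesis-test result (illustrative sketch).

    Type I error: rejecting a true null hypothesis (false positive).
    Type II error: failing to reject a false null hypothesis (false negative).
    The other two combinations are correct decisions."""
    if null_is_true and rejected_null:
        return "Type I error (false positive)"
    if not null_is_true and not rejected_null:
        return "Type II error (false negative)"
    return "correct decision"

print(classify_outcome(null_is_true=True,  rejected_null=True))   # Type I error (false positive)
print(classify_outcome(null_is_true=False, rejected_null=False))  # Type II error (false negative)
print(classify_outcome(null_is_true=False, rejected_null=True))   # correct decision
```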

Measuring the Net Effect of a Program

  • Net Effect: Difference in outcomes between treatment and control groups.
    • Measured using Difference-in-Difference approach:
        1. Baseline: Measure before the program.
        2. Treatment: Implement the program.
        3. Post-treatment: Measure after the program.
        4. Compare the change in the treatment group versus the control group.
    • Formula:
      • Net Effect = (Post-treatment − Pre-treatment)_Treatment − (Post-treatment − Pre-treatment)_Control
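The difference-in-differences formula translates directly into code. A minimal sketch, using hypothetical employment figures for a job training program:

```python
def net_effect(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences: subtract the control group's change
    over time from the treatment group's change to estimate the
    program's net effect."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical employment rates (%) before and after the program.
effect = net_effect(treat_pre=50, treat_post=65,      # treatment: +15 points
                    control_pre=52, control_post=57)  # control:   +5 points
print(effect)  # 10: the program's estimated net effect
```

Subtracting the control group's change removes trends that would have happened anyway (the counterfactual), which is why this beats a simple before/after comparison.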

Examples

  • A job training program aims to increase employment. Compare the treatment group (attended the program) with the control group (did not attend).
    • If employment increased more for the treatment group, the program likely had a positive net effect.
  • Selection Effects: Bias introduced when participants are not randomly chosen.

Stochastic versus Design Effects

  • Stochastic Effects: Random variation that affects results.
    • Example: Some people randomly get better health outcomes even without a program.
  • Design Effects: Errors introduced by how a study is structured.
    • Example: Selection bias in results if participants self-select into a program.
  • Importance:
    • If a program's effects are actually due to random chance, it might not be truly effective.
    • Good research design controls for these effects.
  • Partial Coverage Programs:
    • Only some people receive intervention.
    • Allows for control group.
    • Example: Pilot education programs in select schools.
  • Full Coverage Programs:
    • Entire target population is exposed to the intervention.
    • Lacks natural control group, making evaluation harder.
    • Example: National policies such as Canada's universal healthcare, which covers everyone in the target population.
  • Uniform Full Coverage: Everyone receives the same level of intervention.
    • Example: The Criminal Code applies equally across Canada.
  • Non-Uniform Full Coverage: People receive the intervention at different levels.
    • Example: Welfare benefits vary by province.

Identifying IVs and DVs in Theories

  • Identify which variable is the cause and which is the effect.
  • Restate theories as testable hypotheses, including a null hypothesis.
  • Example
    • Hypothesis: "Democracy reduces corruption."
    • IV: Level of democracy.
    • DV: Level of corruption.
    • Null hypothesis: “Democracy has no effect on corruption.”

Logical Fallacies

  • Logical fallacies are errors in reasoning that undermine arguments. Understanding them is crucial for critical analysis of research, policy arguments, and political discourse.
  • Post Hoc Ergo Propter Hoc (“After this, therefore because of this”): Assumption that if one event follows another, then the first event caused the second.
    • Fallacy: Correlation does not imply causation.
    • Example: If a new mayor was elected and crime dropped, it does not necessarily follow that the mayor's election reduced crime.
      • Crime might have dropped due to economic changes, policing strategies, or demographic shifts.
  • Tu Quoque ("You too") - Appeal to Hypocrisy: Deflecting criticism by accusing the opponent of the same behavior.
    • Fallacy: Truth of a claim is independent of who is making it.
    • Example
      • Person A: “Smoking is bad for your health."
      • Person B: “But you smoke too!”
  • Whataboutism - A Form of Appeal to Hypocrisy: Responding to criticism by pointing to someone else's wrongdoing instead of addressing the issue.
    • Fallacy: Two wrongs do not make a right; each issue should be examined independently.
    • Example: "You criticize our country's human rights abuses, but what about your country's history of colonialism?"
  • Ad Hominem (“Attack on the Man”): Attacking the person instead of their argument.
    • Fallacy: Merit of an argument does not depend on who is making it.
      • Example: “You cannot trust his views on climate change–he is a high school dropout!" The person’s education level does not determine the validity of his argument.
  • Ad Hominem Circumstantial - Appeal to Motive: Arguing that biases or circumstances invalidate arguments.
    • Fallacy: Even biased people can be correct.
    • Example: "Of course climate scientists warn about global warming–they get paid to study it!" The fact that they are paid does not invalidate their research.
  • Genetic Fallacy: Dismissing an argument based on its origin rather than its merit.
    • Fallacy: The source of an argument does not determine its validity.
      • Example: "That law was introduced by a corrupt politician, so it must be bad." The law’s merit should be judged on its impact, not who introduced it.
  • Middle Ground Fallacy - Appeal to Moderation: Assuming the middle position between two extremes is always correct.
    • Fallacy: Truth is not always halfway between two opposing views, nor is it determined by compromise.
      • Example
      • Person A: "The Earth is round."
      • Person B: "The Earth is flat."
      • Middle Ground: “Maybe the Earth is a little bit round and a little bit flat?"
  • Slippery Slope: Arguing that a small step will inevitably lead to extreme consequences.
    • Fallacy: The future is not predetermined, and intermediate steps can be controlled.
      • Example "If we allow same-sex marriage, next people will be marrying animals!”
  • No True Scotsman: Changing the definition of a group to exclude counterexamples.
    • Fallacy: This moves the goalposts, making the claim unfalsifiable.
      • Example: "No true Christian would ever commit a crime." If a Christian commits a crime, they do not stop being Christian; redefining the group to exclude them makes the claim unfalsifiable.
  • Special Pleading: Making an exception to a rule without justification.
    • Fallacy: Rules should apply consistently and personal circumstances do not change the moral principle.
      • Example: "Stealing is wrong–except when I do it because I really need the money."
  • Sunk Cost Fallacy: Continuing an action only because resources have already been invested.
    • Fallacy: Decisions should be forward-looking, not based on past costs.
      • Example: "We have already spent billions on this war–we cannot pull out now!”
        • If the war is unwinnable, continuing wastes more resources.
  • Bandwagon - Appeal to Common Practice: Arguing that something is true or right because many people believe it.
    • Fallacy: Popularity does not equal correctness.
      • Example: "Millions of people use homeopathy, so it must work!”
        • Scientific evidence determines effectiveness, not popularity.
  • Argumentum ad baculum - Appeal to the Threat of Force: Using a threat of violence or consequences to support an argument.
    • Fallacy: Fear is not a valid argument since threats do not prove policy is good.
      • Example: "If you do not support our policies, you will be fired!"
  • False Analogy/False Equivalence: Comparing two things as if they are the same when they are actually different.
    • Fallacy: Analogies must compare things that are truly similar.
      • Example: "Owning a gun is like owning a car–both require responsibility."
        • Cars are designed for transport, while guns are designed to inflict harm.
  • Appeal to Ignorance: Arguing something is true because it hasn't been disproven.
    • Fallacy: Lack of evidence does not prove anything.
      • Example: "Nobody has proven aliens do not exist, so they must exist!”
        • The burden of proof is on the person making the claim.

False Dichotomy

  • Presenting only two options when others exist.
    • Fallacy: Reality is often not black and white.
      • Example: "You are either with us or against us!"
  • The Rhetoric of Reaction, identified by Albert Hirschman, describes three common types of reactionary arguments used to resist change, along with three corresponding progressive rhetorical fallacies. These patterns appear repeatedly in political debates over reforms.

Reactionary Arguments

  • Perversity: Claims that solving a problem will worsen it.
    • Fallacy: Assumes all reforms backfire, without actual evidence.
      • Example: Claims that social welfare programs create more poverty, or that minimum wage laws hurt workers, made without empirical evidence.
  • Jeopardy: Argues that reform undermines something valuable.
    • Fallacy: Often exaggerates negative consequences without proof.
      • Example: Claims that democracy threatens individual liberty, or that the welfare state leads to authoritarianism.
  • Futility: Claims a proposed reform will accomplish nothing.
    • Fallacy: Assumes change is impossible and overlooks incremental progress.
      • Example: Claims that voting rights won't change who holds power, or that climate change policies won't work.
  • Progressive Rhetoric (Counter-Arguments): Rhetorical fallacies used by progressives to justify reform.
    • Synergy Illusion: Assumes all positive reforms reinforce each other, when in reality many policies involve trade-offs and unintended consequences.
    • Historical Inevitability: Assumes that history follows a predetermined path of progress.
      • Fallacy: Social change is not inevitable; it depends on choices, conflicts, and external factors.
    • Desperate Predicament ("Radical Change is the Only Option"): Argues that all previous reforms have failed, so only extreme solutions will work.
      • Fallacy: Overlooks gradual improvements and alternative solutions.

Understand the Concepts Associated with Causality/Causation.

  • Deterministic Cause: If the cause occurs, the effect will always follow.
    • Example: Dropping a glass from 2m will always cause it to break.
  • Probabilistic Cause: Increases the likelihood of the effect, but the effect may not always occur.
    • Example: Smoking increases the probability of lung cancer, but not all smokers get cancer.
  • Why does this matter?
    • In social sciences, most relationships are probabilistic.
      • Example: Education increases the probability of higher income, but does not guarantee it.
  • Establishing a cause-and-effect relationship requires three conditions:
    1. Temporal precedence: The cause must occur before the effect.
      • Example: A rise in CO2 emissions occurs before an increase in global temperatures.
    2. Covariation: The cause and the effect must vary together (be correlated).
    3. Ruling out alternative explanations: Other possible causes must be eliminated before causation is established.
  • Best method for establishing causation: A true experimental design, such as a randomized controlled trial that randomly assigns patients to treatment vs. placebo groups.
  • Causation means one variable directly influences the other; correlation alone does not show this.
    • Example: In summer, ice cream sales correlate with drowning deaths, but both are driven by hot weather (a spurious correlation). By contrast, smoking is correlated with lung cancer and also causes it.
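The ice cream/drowning example is a classic spurious correlation, and a short simulation makes it concrete. The figures below are invented for illustration, and `pearson` is a minimal hand-rolled correlation coefficient rather than a library call:

```python
# (temperature °C, ice cream sales, drownings): hypothetical monthly figures
# where BOTH outcomes are driven by heat, not by each other.
data = [
    (15, 100, 2), (15, 120, 2),    # cool months: low sales, few drownings
    (25, 300, 6), (25, 320, 6),    # warm months
    (35, 500, 10), (35, 520, 10),  # hot months: high sales, many drownings
]

def pearson(xs, ys):
    """Pearson correlation coefficient (minimal stdlib implementation)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

sales = [d[1] for d in data]
drown = [d[2] for d in data]
print(round(pearson(sales, drown), 3))  # 0.998: near-perfect spurious correlation

# Holding temperature fixed (e.g., only the hot months), the link vanishes:
hot = [d for d in data if d[0] == 35]
print({d[2] for d in hot})  # {10}: drownings identical within the stratum
```

Controlling for the confounder (temperature) is exactly what the "ruling out alternative explanations" condition demands.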

Summary of Key Takeaways on Causality

  • Deterministic vs. probabilistic causes.
  • RCTs are the best method for establishing causation.
  • Correlation is not causation; watch for spurious correlations.
  • Circular reasoning (tautology): When the IV and DV are defined in terms of each other, the hypothesis cannot be tested.

Research and evaluation designs

  • One-shot case study: A single observation after treatment, with no pretest and no control group.

  • One-group pretest-posttest: Measures the effect of treatment by comparing before and after, but has no control group.

  • Static group comparison: Compares a treated group with an untreated group, but with no random assignment of participants.

  • True experimental design (randomized pretest-posttest with control group): Random assignment plus pre- and post-treatment measurement. The best design for establishing causality, because randomization removes baseline differences and minimizes bias.

  • To classify a research design, ask whether participants are randomly assigned:

  • YES: It is a true experimental design.

  • NO: It is a pre-experimental or quasi-experimental design, with or without comparison groups.

  • Schematic representations of research designs map out which groups are observed, when they are measured, and whether treatment and random assignment are present.

  • Threats to validity (such as selection bias, history, and maturation) can undermine results; designs should be evaluated against these threats using a standard measure.

Associated Influences

  • Different designs produce different results; the choice of design shapes what can be inferred.

  • Longitudinal designs, which track the same subjects over time, require additional care to interpret and to control for bias.

  • Results vary across cases depending on the policies and programs being evaluated.
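Research designs like these are commonly written in Campbell and Stanley's schematic notation (R = random assignment, O = observation/measurement, X = treatment). The sketch below assumes that convention; the `DESIGNS` table and helper are illustrative only:

```python
# Campbell & Stanley schematic notation: R = random assignment,
# O = observation (measurement), X = treatment. Each line is one group.
DESIGNS = {
    "one-shot case study":        ["X O"],
    "one-group pretest-posttest": ["O X O"],
    "static group comparison":    ["X O",      # treated group, no pretest
                                   "  O"],     # untreated comparison group
    "true experiment (RCT)":      ["R O X O",  # randomized treatment group
                                   "R O   O"], # randomized control group
}

def is_true_experiment(name):
    """A design counts as a true experiment only if every group
    is formed by random assignment (every line starts with R)."""
    return all(line.lstrip().startswith("R") for line in DESIGNS[name])

print(is_true_experiment("true experiment (RCT)"))       # True
print(is_true_experiment("one-group pretest-posttest"))  # False
```

Reading off the schematic also shows each design's weakness at a glance: no O before X means no baseline, a single line means no comparison group, and no R means groups may differ before treatment.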

Key Takeaways

  • Partial Coverage Programs can use RCTs, regression discontinuity, and quasi-experiments
  • Full Coverage Programs require time series or before-after designs when no natural control group exists.

Types of Case Studies

  • Atheoretical: Explores the case without testing theory; a clear limitation is that the findings cannot confirm or refute a theory, and case selection can be biased.
  • Least-likely case: Tests a theory where it is least expected to hold; if the theory still works, this is strong support.
  • Most-likely case: Tests a theory under the conditions where it is most expected to hold; if it fails even there, this counts heavily against it, though such tests have limitations.
  • Deviant (outlier) cases: Selected because they do not fit the theory, and used to revise and refine it.
