Questions and Answers
Which of the following is the correct order of basic steps in the scientific method?
- Generate a hypothesis, make a prediction, observe an aspect of the universe
- Observe an aspect of the universe, generate a hypothesis, make a prediction (correct)
- Make a prediction, observe an aspect of the universe, generate a hypothesis
- Generate a hypothesis, observe an aspect of the universe, make a prediction
Which attribute of science is focused on using information recorded from observation rather than theory or pure logic?
- Uses verifiable data (correct)
- Uses systematic observation and analysis
- Uses theory
- Logical reasoning
Which type of question seeks to describe facts about the world?
- Empirical
- Factual/Procedural (correct)
- Hypothetical
- Normative
What is the primary goal of a good empirical question?
Which example demonstrates a well-formulated empirical question?
What is the purpose of a theory's attributes in research?
If a theory is correct, observable implications should allow us to do what?
What is the purpose of a causal mechanism in a theory?
What do assumptions provide for a theory?
What is the potential pitfall of inductively building a theory from existing data?
What does an ecological fallacy refer to?
What is the purpose of controlling for confounding variables when establishing a causal relationship?
What is the role of empirical indicators in operationalization?
If a survey consistently gives similar results when taken by the same people over time, what does this indicate?
What is the Hawthorne effect in the context of experiments?
Flashcards
Scientific Method
A systematic way of asking questions, observing, hypothesizing, and predicting about the universe.
Inductive Reasoning
Reasoning from data to a theory
Deductive Reasoning
Reasoning from a general principle to data
Uses Theory
Verifiable Data
Systematic Observation
Good Empirical Questions
Theory Definition
Expectation in Theory
Hypothesis
Ecological Fallacy
Omitted Variable Bias
Natural Experiment
Factorial Design
Theories for survey experiments
Study Notes
Scientific Method
- It involves observing the universe, forming a hypothesis about a causal relationship, and making predictions.
Four Attributes of Science
- Logical reasoning is how theories are developed, using inductive reasoning from data or deductive reasoning from general principles.
- Science uses theory to explain phenomena, either colloquially by identifying the main cause or technically through interconnected propositions.
- Verifiable data, or information recorded from observation, is used.
- Facts are based on verifiable observation rather than pure logic.
- Systematic observation and analysis involve clear, justifiable steps.
Different Types of Questions
- Factual/procedural questions describe facts.
- Hypothetical questions explore future possibilities.
- Normative questions address how the world should be.
- Empirical questions explore how the world works through causal relationships.
Good Empirical Questions
- Effective questions ask why things occur and focus on explaining general patterns.
- They begin with a puzzle, can't be answered with a simple search, and are general rather than specific.
- Good questions avoid too many proper nouns.
Theory
- It answers the research question by explaining why something happens, simplifying reality by removing irrelevant factors.
- Expectations in a theory causally relate explanatory factors to outcomes.
- A hypothesis tests the relationship between independent (IV) and dependent (DV) variables.
Variables
- An independent variable (IV) affects the dependent variable (DV).
- A dependent variable (DV) is affected by other variables.
- Variables can have various values and impacts.
Observable Implications
- They are predictions that should be true if a theory is correct.
- They are derived from the theory, and if not true, can falsify the theory.
- The IV influences the DV, such as democracy reducing war.
- The DV influences the IV, which is rare.
- A causal mechanism explains the causal relationship, detailing the steps of how changes in the IV affect the DV with specific, measurable effects.
Assumptions
- They provide the implicit conditions that must be valid for the theory to make sense, defining the underlying causes.
- Definitions set the foundation of a theory, which may not be directly testable.
- Assumptions often include identifying actors involved (voters, leaders) and their motivations (self-interest, greed).
Comprehensive Example
- A theory posits that easier voting access increases voter turnout.
- The assumption is people are more likely to vote if it’s easier.
- The prediction is policies like early voting increase voter turnout.
- The observable implication is states with easier voting measures should have higher voter turnout.
- The causal mechanism is reduced costs of voting leading to fewer logistical and time-related barriers.
Scope Conditions
- They define when and where the theory applies, considering temporal (specific time period) and spatial (specific place) factors.
How to Build a Theory
- Inductively, build from existing data by finding patterns and formulating a fitting theory, but test on new data to avoid fitting the data too closely.
- Deductively, develop a theory then test it by generating a hypothesis, gathering data, and testing the theory.
What You Need to Make a Theory-Research Design
- A research plan to collect and analyze data, including identifying the unit of analysis: the cases you study, also called the unit of observation.
- Determine the subject involved in the outcome (DV) and the units for the IV and DV, such as individuals or states.
Ecological Fallacy
- A problem that occurs when drawing an inference about an individual based on aggregate data for a group, which can lead to picking the wrong unit or people for analysis.
Causal Relationships
- Deterministic laws state that if X, then Y.
- Probabilistic theories describe average effects.
- Outcomes can have multiple causes; in a bivariate relationship, one variable (X) causes another (Y).
- Multivariate relationships involve more than two variables, where multiple factors together contribute to the outcome (Y).
Four Hurdles of Establishing a Causal Relationship
- Establish a correlation between X and Y.
- Rule out reverse causation, ensuring Y does not cause X; a cause must precede its effect in time.
- Ensure a credible causal mechanism exists.
- Control for confounding variables.
- A confounding variable (Z) is correlated with both the independent variable (X) and dependent variable (Y), altering their relationship.
- Control for confounders by holding the Z factor constant across multiple cases.
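One way to picture "holding the Z factor constant" is to compare X and Y only within groups of cases that share the same value of Z. A minimal Python sketch, with invented cases (regime type as X, war/peace as Y, and wealth as a possible confounder Z):

```python
from collections import defaultdict

# Hypothetical cases: (X, Y, Z), where Z (wealth) is a possible confounder.
cases = [
    ("democracy", "peace", "rich"),
    ("democracy", "peace", "rich"),
    ("democracy", "war",   "rich"),
    ("autocracy", "war",   "poor"),
    ("autocracy", "war",   "poor"),
    ("autocracy", "peace", "poor"),
]

# "Hold Z constant": group the cases by Z, then inspect the X-Y pattern
# separately within each group.
by_z = defaultdict(list)
for x, y, z in cases:
    by_z[z].append((x, y))
```

If the X-Y association survives within each Z group, Z alone cannot explain it.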
Measurement
- Review literature on the concept before developing and clarifying concepts by defining the IV and DV.
- Operationalization involves assigning numbers/values to variables based on definitions.
- Empirical indicators are physical, observable characteristics used to assess variables numerically, such as assigning a frown a value of 1 on a scale of 1-10 for sadness.
- After identifying empirical indicators, procedures are specified for applying them.
- Produce an operational definition: a detailed description of the procedures necessary to assign values to units of analysis.
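The frown example above can be made concrete: an operational definition is essentially a rule that maps an observable indicator to a number. A toy sketch (the expression categories and scale values are invented for illustration):

```python
# Hypothetical coding rule: the operational definition maps an observable
# indicator (a facial expression) to a number on a 1-10 sadness scale,
# where 1 is the saddest.
SADNESS_SCALE = {"frown": 1, "neutral": 5, "smile": 10}

def operationalize(expression: str) -> int:
    """Apply the measurement procedure to one unit of analysis."""
    return SADNESS_SCALE[expression]

scores = [operationalize(e) for e in ["frown", "smile", "neutral"]]
```

Writing the procedure down as an explicit rule is what makes the measure repeatable by other coders.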
Types of Data Sources
- Verbal self-reports are based on respondent answers from interviews or surveys.
- Direct observations involve observing and recording a behavior or outcome.
- Archival records include stats, public/private documents, and newspaper reports.
- Measurement error is the gap between the measure and the actual concept due to poor operational definitions and unclear measurement procedures.
Validity and Reliability
- Reliability is consistency, dependability, and the ability to produce the same results when repeated, but it does not ensure validity.
- An indicator is reliable if it produces the same result when used by different people or repeated under the same conditions.
- Test-retest reliability applies the same measure to the same subjects at different times.
- Internal consistency assesses agreement within an index or scale.
- Intercoder reliability measures the extent to which different observers get equivalent results.
- Validity requires reliability and ensures the indicator captures the intended phenomenon accurately.
- An example includes political ideology surveys asking relevant questions about economic and social policies.
- Convergent validity compares measures against others measuring the same thing, aiming for matching results.
- Construct validity ensures the measure corresponds theoretically to what it is trying to measure.
- Internal validity assesses whether the cause has been accurately identified and whether other factors could be responsible, requiring accurate definitions of the IV and of what affects the DV.
- External validity measures the extent to which findings can be generalized to other settings.
Problems
- The fundamental problem of inference is that for any single case, only one of two potential outcomes can be observed: what happens with the treatment or what happens without it, never both.
- Statistical power, in the context of experiments, measures the probability of detecting a true effect.
- Randomization is the best approach to ensure treatment and control groups are the same except for the intervention.
- A spurious correlation occurs when two variables appear related but the relationship is not causal, often because confounders were not controlled, a common risk in observational research.
- The compliance problem is the difference between the assigned treatment and the actual receipt of the treatment.
Experiments
- Omitted variable bias occurs when a relevant variable affecting both the IV and DV is left out of the analysis.
- In a natural experiment, the IV arises naturally: nature, rather than the researcher, assigns the IV (for example, whether a country is a democracy). The main challenge is ruling out confounders.
- Steps for experiments:
- Randomly assign subjects to values of the IV.
- Rule out reverse causation from Y to X.
- Control for confounders (Z).
- Identify a causal mechanism from X to Y.
- The basics consist of creating two groups: a treatment group, which is exposed to the intervention, and a control group, which is not. Done well, this rules out confounders and reverse causation.
- In a between-subjects design, one group gets the treatment and the other gets nothing; in a within-subjects design, the same subjects experience both conditions.
- A factorial design measures the individual effects when multiple variables change simultaneously; levels are the different conditions for each factor.
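The random-assignment step above can be sketched directly: shuffle the subject pool and split it in half, so that on average the two groups differ only in the intervention. (The subject names and fixed seed are illustrative only.)

```python
import random

# Hypothetical subject pool; the seed is fixed only for reproducibility.
subjects = [f"subject_{i}" for i in range(20)]
rng = random.Random(42)

pool = subjects[:]
rng.shuffle(pool)

treatment = pool[: len(pool) // 2]  # exposed to the intervention
control = pool[len(pool) // 2 :]    # not exposed
```

Because membership is determined by chance alone, any confounder Z is, in expectation, balanced across the two groups.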
Types of Experiments
- Lab experiments recruit people to a lab, where treatments can be manipulated to maximize internal validity. They are often more concerned with behavior than beliefs.
- Survey experiments field a survey and randomly assign respondents to groups, manipulating some aspect of the survey; each group receives a different version. Numeric values can be assigned to responses to analyze the results.
- Field experiments randomly assign people a treatment in the real world, combining internal and external validity.
Hawthorne Effect
- The Hawthorne effect shows that being studied changes behavior, which undermines internal validity.
- An example includes get-out-the-vote studies.
- A sample is a representative portion taken from a larger group.
Types of Sample
- Selection bias occurs when the sample is not chosen correctly; choosing the right sample is important for accurate results.
- Probability sampling (random sampling): these methods ensure that every individual in the population has a known chance of being selected, which helps generalize results.
- Simple random sampling (SRS): every individual in the population has an equal chance of being selected.
- Systematic sampling: selecting every nth individual from a list.
- Stratified sampling: dividing the population into subgroups and randomly selecting within each to ensure representation.
- Cluster sampling: dividing the population into clusters and randomly selecting entire clusters instead of individuals.
- Non-probability sampling: these methods do not give all individuals a known chance of selection, which can introduce bias, but they are useful in exploratory research.
- Convenience sampling: selecting subjects who are easiest to reach.
- Purposive (judgment) sampling: selecting specific individuals based on characteristics relevant to the study.
- Quota sampling: ensuring specific groups are represented by selecting a fixed number of respondents from each group.
- Snowball sampling: used for hard-to-reach populations, where participants recruit others.
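Three of the probability methods above can be sketched in a few lines of Python; the population, sample sizes, and strata here are invented for illustration:

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 units
rng = random.Random(0)            # fixed seed for reproducibility

# Simple random sampling: every unit has an equal chance of selection.
srs = rng.sample(population, 10)

# Systematic sampling: every nth unit from the list (here n = 10).
systematic = population[::10]

# Stratified sampling: divide into subgroups, then sample randomly within each.
strata = {"low": population[:50], "high": population[50:]}
stratified = [u for group in strata.values() for u in rng.sample(group, 5)]
```

The stratified draw guarantees each subgroup is represented, which a simple random sample of the same size cannot promise.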
- Design: setting up the conditions and how you will carry out the experiment.
- An audit study (a résumé study) examines racial forms of discrimination in employment opportunities.
- Bundled treatments in experiments.