Questions in PO Intro to Research PDF


Summary

This document discusses the scientific approach to politics, covering empiricism, determinism, and objectivity. It explores the use of evidence in assessing arguments and the importance of intersubjectivity in research. The document also touches upon normative and empirical analysis.

Full Transcript


Chapter 1 – Science and the Study of Politics – Summary

- In your classes and your daily life, you regularly encounter arguments: positions that are supported by reasons. These arguments seek to persuade you about something: that a description or explanation is accurate, or that an action is justified.
- At other times, the reasons given to support an argument are based on evidence – that is, information observed and measured in the world – and assessing the argument critically requires assessing the quality and relevance of the evidence.
- An aim of the scientific approach to politics is to use critical thought as a guide to our perceptions of the political world. As Isaac (2013) writes, "All of the sciences – physics and chemistry and sociology and economics as well as political science – generate huge amounts of high-level theoretical scholarship that uses specialized language and concepts and addresses problems in ways that will not seem self-evident to citizens who are not scientists. That is what science is."

Normative and Empirical Analysis:
- Normative analysis, the realm of political theory and philosophy, is prescriptive in nature and puts forward arguments about how society and political life should be.
- Empirical research seeks to base its arguments on evidence obtained from observation and measurement of the physical and social worlds.
- Intersubjectivity is important, as it demonstrates that findings are not isolated to a particular researcher/research team, research approach, or context.

The Evidence Continuum:

The Scientific Approach:
- We need a set of agreed-upon principles and rules to guide our research and to assist us in assessing the research done by others. These agreed-upon principles and rules are science.
- At its root, science is a set of beliefs about the natural world (an epistemology, or approach to knowledge) and a corresponding set of rules (a methodology, or way of obtaining knowledge) that help us understand the world.
- When applied to the social world, the scientific epistemology is often referred to as positivism, an approach that can be traced back to nineteenth-century sociologist Auguste Comte.

Core Beliefs of the Scientific Approach:
- Empiricism: First, there is the core belief of empiricism: that knowledge is derived from real-world observation, rather than being derived a priori or by intuition. Empiricism motivates us to measure and understand the political world, and to extend and generalize our understandings through the formulation of theories, which are integrated sets of explanations about the political world.
- Determinism: Second, there is the core belief of determinism, the idea that everything has a cause.
- Objectivity: A third core belief is the importance of objectivity, the belief that science should create an accurate representation of reality.
- Replication: The final core belief is replication, the belief that knowledge is acquired through a continuous application of the scientific method – that is, repeated observation, careful testing, and approximate duplication of the results under varying conditions.

The Scientific Approach versus the Scientific Method:

Limitations of the Scientific Approach in Politics:
- Interpretivism: Positivism is sometimes contrasted with interpretivism. A fundamental principle of interpretivism is that it is not possible, and may not even be desirable, to try to separate observers from their observations.

The Scientific Process:
- The scientific process can be thought of as having three broad steps: pose a research question, gather empirical evidence to answer the research question, and communicate the results.

Pose a Research Question:
- Scientific research starts with a question.
- Some research questions are more descriptive in nature: How have political party finance rules changed over time? Do men and women differ in support for universal child-care policy? Which political actors opposed military intervention in a particular conflict?
- Other research questions are more explanatory in nature: Why do some people support economic development through extraction of non-renewable resources while others are opposed? Under what conditions does conflict escalate into war? How did the context influence the outcome?

Gather Empirical Evidence:
- The evidence that a researcher collects to answer a research question is referred to as data. (Note that data is a plural noun; for a single piece of evidence, we use the word datum.)
- Data can be in the form of texts (which includes written words, images, and audio files) or numbers (such as counts, percentages, and so forth).
- The research question also points the researcher to the different research design strategies that can be used in the data collection phase.
- Research design strategies vary in the number of cases included, with cases being single units of the object or population of study. A small-N study is one with a small number of cases, and a large-N study is one with a large number of cases.
- Qualitative research involves the inclusion of a small number of cases and textual data. Quantitative research involves the inclusion of a large number of cases and the collection of numerical data.
- Other research questions may require conducting multi-method research, a research approach in which research teams use a series of different data collection methodologies, while still other research questions may require mixed methods research, a research approach in which researchers integrate qualitative and quantitative research approaches.
- As we move through Part I of this text, we will discuss what to look for with respect to measurement, causality, and generalizability when reading research. In Part II, we discuss at length specific data collection methodologies (textual analysis, interviews, survey research, experimental research, and other data collection strategies) and identify additional questions to ask as a reader. In Part III, we discuss how researchers use conceptual analysis techniques with qualitative data and statistical techniques with quantitative data.

Communicate the Results:
- Typically, it is expected that the research goes through some form of peer review process prior to publication, and peer-reviewed research is valued more highly than non-peer-reviewed research.

Reading and Writing Political Science Research, Chapter 2 – Summary

The Abstract or Executive Summary:
- For academic work, including journal articles, chapters in edited books, and university research papers, this takes the form of an abstract: a 100-250-word summary of the report's research question, methodology, key findings, and implications.
- For non-academic work, such as government, non-profit, and industry reports, this snapshot takes the form of an executive summary, which is a short summary (typically 1-5 pages) of the same information.

Literature Review:
- Research reports written for academic audiences, such as academic journal articles and university course assignments, must include a literature review, which is a highly focused presentation of the existing academic research that is directly relevant to the research question and the report's line of argumentation.

Research Design:
- Remember, intersubjectivity is one of the key ways we guard against bias; thus, all research must be transmissible (that is, the methods can be explained) and replicable (that is, another person could conceivably attempt to do a similar study and draw similar conclusions).

The Peer Review Process:
- Numerous predatory journals, which intentionally engage in deceitful and unethical practices to suggest they are peer-reviewed, publish work that often fails to meet even the most basic standards of sound scientific research.
- Grey literature is not peer-reviewed, but unlike predatory journals, it does not purport to be peer-reviewed; examples include government reports, think-tank and non-governmental agency reports, conference proceedings, and industry sector reports.

Presentation of Findings:
- Finally, researchers should keep in mind the potential importance of null findings, instances where the results do not match expectations and the data do not support the research hypothesis (the stated expectation being tested in a research article).

References:

Topic 1 – Introduction to Research – September 10, 2024

Overview:
- Canadian Journal of Political Science (CJPS)
- What is social science research?
- Two dominant forms of political analysis: Normative and Empirical (observation from various sources)
- The scientific approach and the scientific method
- Limitations and critiques of the scientific approach
- Intersubjectivity = transmissible (step-by-step instructions) and replicable (replication of the same process) and the Evidence Continuum
- The scientific research process
- Qualitative and quantitative research
- Key terms

What is social science research?
- As defined by the Federation for the Social Sciences and Humanities, "...the social sciences are fields of study that may involve more empirical methods [compared to the humanities] to consider society and human behaviour, including – but not limited to – anthropology, archaeology, criminology, economics, linguistics, political science and international relations, sociology, geography, law and psychology."

Two dominant forms of political analysis: Normative and Empirical
- Normative Analysis: Prescriptive; ideals, value judgements
- Empirical Analysis: Descriptive, explanatory; observation

Normative Analysis:
- Prescriptive in nature
- What is right or wrong?
- What should be done?
- Based primarily on reason and logic
- Is accepted or rejected based on whether the premises, reasons, and logic stand up upon evaluation
- In political science, usually part of the subfields of political theory and philosophy
- These value judgements can be as noncontroversial as "democracy is good," or as controversial as "taxation is theft." It is impossible to make a normative argument without stating a position based on values.

Empirical Analysis:
- Descriptive or explanatory in nature
- What is it? Why does it happen like that?
- Based on knowledge obtained from observation or measurement of the world (physical or social)
- Is accepted or rejected based on the existence and quality of the evidence
- Empirical researchers may have normative commitments that are a part of why they choose their research questions or how they carry out their work! But their empirical work is judged on its evidence, not on those commitments.

The scientific approach and the scientific method:

Science:
- "...a set of beliefs about the natural world (an epistemology, or approach to knowledge) and a corresponding set of rules (a methodology, or way of obtaining knowledge) that help us understand the world." (Berdahl and Roy, 2021, p. 9)
- Science is not what you study, but how you study it. If a study is done according to the rules of science, it is science.
- "The content of 'science' is primarily the methods and rules, not the subject matter, since we can use these methods to study virtually anything." – King, Keohane, and Verba (1994), Designing Social Inquiry. Princeton University Press.
- The scientific approach is often called positivism (contrasted with interpretivism)
- Epistemology: an approach to, or understanding of, knowledge. What is knowledge? How do we know something?
- Methodology: a way of obtaining knowledge.
- If I gather information in this way, it will be valid according to my understanding of what knowledge is.

The scientific approach:

Core beliefs of the scientific approach:
- Empiricism: knowledge is derived from real-world observation, rather than being derived from theoretical deduction or by intuition
- Determinism: everything has a cause that we can find
- Objectivity: science should create an accurate representation of reality
- Replication: scientific knowledge is cumulative, so we need to repeat research to make sure it's true

Because of empiricism...
- Social scientists gather information about the world by observing it, collecting empirical information
- Social scientists continually work to improve how they measure and gather information about the world, developing new tools and techniques that will make their observations more accurate

Because of determinism...
- Social scientists don't only describe the world but make causal arguments about how it works.
- Social scientists focus on questions of cause and effect...but usually think of them as probabilistic, because of the nature of social life

Because of objectivity...
- Social scientists work to reduce the influence of their own biases on their work.
- Social scientists likely still have normative beliefs that affect their choices of questions or desired outcomes, but their goal is not to prove themselves right but to determine what is happening.

Because of replication...
- Social scientists explain in detail how they got their results, so that other social scientists can judge their methods and repeat their research.
- Social scientists make their data public, when possible, so that other researchers can check their results.

Limitations of the scientific approach:
- The scientific approach does have limits.
- Those who use it think it is valuable despite those limits, but that doesn't mean it can be everything.
- It's hard to empirically measure lots of important things
- We must ask people what they think, and we can't be sure that they're telling the truth, or that one person and another mean the same thing when they give the same answer
- True objectivity is impossible
- There is always a chance that someone's biases and preferences will affect their research, even when they attempt to limit it

Intersubjectivity and the Evidence Continuum

Intersubjectivity:
- Independently conducting research on the same issue, to check the results against each other
- Demonstrates that findings are not isolated to a particular researcher/research team, research approach, or context
- When multiple studies demonstrate the same or similar findings, our confidence in the findings increases.
- For evidence-based policymaking, this can help deal with the challenges of competing evidence.
- For most social scientists, evidence-based policymaking is best when based on not just one piece of research, but a body of research that has been replicated and meets the conditions of intersubjectivity.

The scientific research process: There are three steps to conducting research using the scientific approach:

1. Propose a Research Question
- Your question can be descriptive (what?) or explanatory (why?)
- Before you frame a research question, you should look to see what other researchers have said – the "state of the literature"
- Remember – Intersubjectivity! Replication!

2. Gather Empirical Evidence to Answer the Research Question
- This evidence is usually referred to as data (singular: datum)
- Data can come in the form of texts or numbers.
- What type of data you use determines the type of analysis you can do with it.
- Small-N studies vs. large-N studies
- Qualitative vs. quantitative analysis (next slide)
- Multi-method or mixed-method research

Qualitative and quantitative research:
- Qualitative research: small-N study; rich detail; context specific
- Quantitative research: large-N study; broad patterns; generalizations
- "Until recently, heated debates between quantitative and qualitative researchers constituted one of the deepest divisions in the political science community." – Jared Wesley

3. Communicate Your Results
- Communication is essential to the work of social science
- The goal of communication is the sharing of information and the advancement and refinement of knowledge
- For professors and other senior researchers, the best communication outlets are those that are peer-reviewed (evaluated by other researchers prior to publication)
- For students and junior researchers, their work is usually communicated with the support of their supervisors

Key Terms:
- Determinism
- Empirical research
- Interpretivism
- Intersubjectivity
- Normative analysis
- Objectivity
- Peer review process
- Positivism
- Replication
- Scientific approach to politics

Reading Comprehension (Topic 2):
- Explorations, chapter 2 – Finished
- Bonus Syllabus Quiz – Finished
- TCPS 2: Core – Finish by September 24, 2024 – Finished
- Finish Chapter 3 and chapter 4.1 readings – Finished

Chapter 3 – Research Ethics – Summary
- Indeed, the most rigorous ethical standards tend to come into play with respect to data collection: the samples we select, the information provided to subjects or respondents, the precautions taken to ensure confidentiality, and the avoidance of risk.
Dangers of Unethical Research Practices:
- In the early days of medical research, though, subjects were often recruited without their consent, much less informed consent (meaning that research subjects understand the nature of the study and its potential risks)

Ensuring Ethical Practice at All Stages of Research:

Putting Research Subjects at Risk: The Milgram Studies of Obedience
- At times, the risk to research subjects can come through increased self-awareness; we may find out that we are not as nice or compassionate as we thought we were
- A good example of this risk comes from Milgram's (1963) famous studies on obedience
- If the teacher hesitated, he was prodded to continue, with statements delivered in the following sequence: 1. Please continue. 2. The experiment requires that you continue. 3. It is essential that you continue. 4. You have no other choice; you must go on.
- As the study went on, "subjects were observed to swear, bite their lips, groan, and dig their fingernails into their flesh" (Milgram, 1963, p. 375); some had uncontrollable seizures
- If a subject continued to resist, the experiment was stopped

Selecting the Research Topic:
- While grant applications are not assessed exclusively based on social relevance, and funding basic research (research that seeks to advance knowledge purely for knowledge's sake) is part of SSHRC's mandate, social relevance remains an important factor in the funding formulas
- For example, Roberts (2015) argues that the history of eugenics and racially biased policy, combined with the social value placed on intelligence, means that research on genetics and intelligence cannot be socially neutral: "Today, research on genetics – even if the research does not use social classifications – maps onto existing social hierarchies and the stereotypes about intelligence that support them"

Justice:
- Equity requires distributing the benefits and burdens of research participation in such a way that no segment of the population is unduly burdened by the harms of research or denied the benefits and the knowledge generated from it

Research Design: Research Involving Indigenous Peoples of Canada
- Failure to share data and resulting benefits; and dissemination of information that has misrepresented or stigmatized entire communities
- As a result, Indigenous peoples continue to regard research, particularly research originating outside their communities, with a certain apprehension or mistrust
- Deception is not commonly used in political science research, and when it is, it is usually with experimental research designs
- Confidentiality means that the researcher knows the research subjects' identities but ensures that this information is not shared with individuals outside the research team
- Anonymity means that the researcher does not know the research subjects' identities and that the research details cannot be combined in a manner that would reveal the subjects' identities

Data Collection:
- The ethical guidelines governing social science research raise three interconnected lines of defence around data collection from research subjects: privacy, informed consent, and the right to withdraw
- Second, research subjects must provide informed consent prior to data collection

Big Data, Secondary Data, and Research Ethics:
- Rather than needing to collect original data, researchers can access both big data (content data from digital and social media) and secondary data (data collected by other researchers)

Data Analysis:
- As discussed in Chapter 1, objectivity is a core belief of the scientific approach, and as researchers we strive to minimize the effect of the observer
- In quantitative research, the emphasis on statistical significance (discussed in Chapter 13) has led to growing concerns about p-hacking, a data manipulation practice in which researchers change their statistical models by selectively including or excluding variables to achieve statistically significant results

Research Dissemination: Publication Bias in Quantitative Research
- In recent years, academics have raised growing concerns about publication bias, in which quantitative research that provides support for a hypothesis in the form of statistically significant results is more likely to be accepted for peer-reviewed publication than research that does not present statistically significant results
- To address this, there is growing use of preregistration: researchers submit their research design, hypotheses, and data analysis plan to a registry prior to starting data collection, or prior to starting analysis of a pre-existing dataset
- Ideas expressed in a research publication, a public presentation, or even a university lecture class are referred to as intellectual property

Research Foundations: Theory, Concepts, and Measures – Chapter 4 Summary
- A theory argues that there is a relationship between concepts, that is, abstract ideas that represent qualities in the world
- From this, we can make a prediction, known as a hypothesis, that
allows us to test our theory through real-world observations that we use to gather empirical evidence

Theory and Political Science Research:
- Theory-oriented research, also referred to as basic research, aims to broaden our understanding of political life

The Role of Research in Developing, Testing, and Refining Theory:
- Theory-building research, also referred to as inductive research or exploratory research, seeks to obtain real-world observations sufficient to develop a simple (parsimonious), generalizable (general), and testable (falsifiable) explanation of the variation of interest; this parsimonious, general, and falsifiable explanation is the theory
- Theory-testing research, also referred to as deductive research, is research that deliberately sets out to test the hypotheses established by theory

Concepts:
- Ultimately, we seek to establish a causal mechanism, which is a plausible explanation of why the concepts are related
- Some concepts are categorized as a typology, in which cases are categorized based on their characteristics
- Other concepts are categorized based on ordering or ranking along a continuum, in which we order a concept's values along a dimension, ranging from low to high or from less to more
- Concepts have both a label (e.g., regime type, ideology) and a conceptual definition (sometimes referred to as the nominal definition), which is an explicit description of the concept in question
- A unidimensional concept has only one underlying dimension (for example, age or height), whereas a multidimensional concept is one in which more than one factor, or dimension, exists within a concept (for example, social status)
- Conceptualization, that is, the process of choosing the conceptual definition for a study, is important for three reasons
- The conceptual definition is the standard by which we assess the validity of the operational definition, which is a concrete, measurable version of the concept, and the process of moving from abstract concepts (conceptual definitions) to concrete measures (operational definitions) is known as operationalization

Sources of Conceptual Definitions:
- At times, researchers employ inductive reasoning, using empirical evidence to draw a conclusion to help form the definition of a concept

Measurement: Linking Theory to Empirical Study:
- What we need is some form of measure, which is a tool by which we obtain observable evidence about our concept of interest

Applied Research:
- Research directed at finding answers to specific problems, with immediate practical usage, is known as applied research; examples include cost-benefit analyses, social impact assessments, needs assessments, and evaluations of existing programs or policies

Measures in Qualitative Research:
- In qualitative approaches, researchers seek to discover themes, which are recurring patterns of importance to the topic found in the data

Measures in Quantitative Research:
- Operationalization means moving from a concept to a variable, which is a more concrete representation of the concept that has variation within it, to an indicator, which is how we assign each individual case to the different values of the variable
- In these situations, after the data have been collected, indicators are combined into an index, a single measure of the concept or variable in question

Moving from Concepts to Variables:
- Environmentalism
- Political interest
- Democracy
- Privacy
- Gender equity

Hypotheses and Causality:
- A hypothesis statement that variables are related speaks to correlation, a state in which two entities change in conjunction with each other
- A hypothesis statement of how the variables are related speaks to causality, which is the idea that one event influences another

Causality:
- The variable believed to be causing the change is known as the independent variable, and the variable believed to change because of the other variable is known as the dependent variable

Causality Criterion 1: Correlation
- In cases where variations within the concepts can be ranked from low to high (for example, age or income), a positive correlation occurs when the direction of change is the same for each variable (both increase or both decrease), and a negative correlation occurs when the direction of change is inverse (one increases and one decreases)

Causality Criterion 2: Temporal Order
- For this reason, causal arguments must consider temporal order, which is the time sequence of events
- These variables are known as prior conditions, and common sense tells us that they occur first

Causality Criterion 3: Absence of Confounding Variables
- There is always the possibility that the relationship is spurious, which means the relationship between two variables can be accounted for by a third variable
- Confounding variables (also referred to as confounds, confounders, and third variables) are factors that are correlated with both the independent and the dependent variable

Evaluating Causal Claims:
1. Correlation
2. Temporal order
3. Elimination of confounding variables
4. Plausible causal mechanism
5. Consistency

Hypothesis Testing and Theory:
- The default assumption, known as the null hypothesis, is that there is no relationship between the two variables of interest

Causal Models and Temporal Order:
- The examples we have presented thus far are between two variables, one dependent and one independent, known as a bivariate relationship
- Theories are often more complex than this, incorporating multiple independent variables (a multivariate relationship) in a complex causal chain
- An intervening variable is one that comes between the independent and dependent variables – for example, a model in which income is modelled as an intervening variable between education (independent variable) and support for socialism (dependent variable)
- A causal model might also include conditional variables, which strengthen the relationship between an independent variable and a dependent variable for some categories (subgroups) of the conditional variable, while weakening the relationship for others
- Finally, a causal model might include reinforcing variables, which strengthen and magnify the relationship between an independent and a dependent variable

Levels of Measurement:
- These are categorical concepts: the variations within the concept indicate differences of kind, and cases are grouped into categories according to descriptive types
- These are continuous concepts: the variations within the concept indicate differences of degree, the concept's characteristics are sequentially connected, and categories are placed on a continuum, with cases assigned to categories according to their position on the continuum
- The researcher must select a level of measurement that is appropriate to the concept, with lower levels of measurement having less precision between variable categories and higher levels of measurement having greater precision
- Categorical concepts are operationalized into nominal variables, whose categories cannot be ordered or ranked
- Religious affiliation and province of residence are
examples of nominal variables, as are "yes-no" distinctions
- Ordinal variables allow for the ordering of categories along a continuum, but without a precise distance between categories; "strongly disagree-somewhat disagree-neither agree nor disagree-somewhat agree-strongly agree," "poor-fair-good-excellent," and the "A-B-C-D-F" grading system are all examples of ordinal rankings
- Interval/ratio: Interval variables are those that can be ordered and for which the categories are separated by a standard unit
- Ratio variables have the same qualities as interval variables, with the additional quality of an absolute zero (that is, a value of zero means that there are none of the variable's values)

Sep 12, 2024 – Reading and Writing Political Science Research – Lesson #2: Topic 2

Overview:
- What is a Research Report?
- The Audience
- Presenting your Work
- Constructing your Argument
- Components of a Research Report
- Key terms

What is a Research Report?
- Research reports are the final stage of any research project, where you report your results to others.
- A research report can be a class paper, a graduate thesis, a policy paper, a journal article, or a book.
- The format of the research report will be determined by the type of research and your stage in your research career, but they all have the same necessary parts.
- A research report is not an afterthought; it is essential to the research process.

The Audience:
- Any research report has an audience.
- A student paper – the course instructor
- A graduate thesis – the supervisory committee
- A policy report – policy makers
- A professor's research project – other researchers

Know your audience.
- What will your audience's prior understanding of the topic be?
- Why will they want to know the results of the research?
- How long will you have to explain to them?
- Will they understand complicated statistics or theoretical points?

Be sure to match your presentation of the results to your audience:
- Your professor wants to know if you understand course material.
- A policy maker wants to know what the information suggests they should do.
- A fellow researcher wants to know how your work supports a body of knowledge.

What if you've got multiple audiences?
- This is good – it happens all the time!
- Sometimes, it means you produce different reports – an academic article, a policy brief, and an op-ed piece can all be different research reports for the same project.
- Sometimes, it means that you put the technical parts of a report in an appendix, footnotes, or online supplements

Presenting your Work:
One way to report on research is with oral presentations:
- These are also different from a written report – but they should contain the same basic information.
- Again, consider your audience: what will they want to know? How long will you have to present?
- Think carefully about visuals: how can they support your presentation? What can they communicate?
- Keep it simple and remember, for an oral presentation, less is more.
- What is the takeaway you want your audience to remember?

Constructing your argument:
- Remember – arguments are positions based on reasons. For political science, reasons should be based on empirical evidence.
- If the evidence is of poor quality or does not support the argument, the argument is rejected.
- Your goal in your research report is to convince your readers that your argument is well-supported with good quality evidence. (Of course, you can only do that if it's true!)
- A strong argument requires using the evidence to build your case for your position.
- Reporting the evidence alone is insufficient – you need to craft an argument
- You can't ignore competing evidence – you must figure out either how to explain that it doesn't matter, or how it fits in the bigger picture
- Think of yourself as a lawyer. How would you 'win the case' for your findings?
Components of a Research Report:
- Every research report is tailored to its context...but they all have the same basic parts:
- Abstract/Executive Summary
- Introduction
- Literature Review
- Research Design
- Presentation of Findings
- Discussion
- Conclusion
- References
Components of a Research Report – Abstract or Executive Summary:
- Abstracts are used in academic reports, whereas executive summaries are more usually a part of non-academic reports.
- The point is to summarize the entire paper briefly.
- This helps the reader understand the arguments they will be evaluating, as well as determine whether they should read the full text.
Components of a Research Report – Introduction:
- The point of an introduction is to interest your readers and set them up to understand what is going to come.
- It should not be identical to the abstract – your reader has already read that.
- Your introduction should contain a clear thesis statement, which should summarize your argument.
- It should also explain why the topic is interesting and worth studying – the ‘so what’ question.
Components of a Research Report – Literature Review:
- The literature review is essential to all academic research.
- Remember: Intersubjectivity! How can you know how your research relates to other research if you haven’t read it, and don’t tell your audience what it says?
- Not all literature reviews have a heading that says “literature review”.
- Your literature review can be organized by theme or topic, and usually will have headings that describe themes.
- Non-academic research reports may not have a literature review, or may have a much smaller one.
- What does a literature review need to do?
- How does this project grow out of the literature?
- How is it built? How is it new?
- What are the empirical definitions and measures that this project uses?
- These are likely to build from prior literature – either borrowed from other sources, or developed as advances on prior work.
- How does the project fit into the current state of research?
- Does it confirm or contradict existing knowledge? Does it change what we thought we knew?
Components of a Research Report – Literature Review:
- First, work to understand the scholarly conversation you are entering.
- What is already known?
- What questions are still open?
- Are there different positions or schools?
- Second, write a literature review that explains what you have learned about the state of the field.
- How can you communicate what you have learned to others?
- Building a literature review requires reading critically when you read the literature.
- The goal of critical reading is to assess the argument.
- To do this, you need to understand the structure of the argument and be able to evaluate the evidence.
- The two-pass approach to reading may help you do this:
- On the first pass, you skim the report, focusing on the structure of the argument and the nature of the evidence.
- On the second pass, you evaluate the evidence in detail.
- Questions to answer when evaluating a research report’s argument:
- What exactly is the argument?
- How does the author use the literature review to situate the work within social science research conducted to date on the same or similar topics?
- Are the key terms clearly defined?
- Is the analysis balanced?
- Is there a clear connection between the evidence provided and the conclusions drawn?
- Overall, do you feel that the argument is strongly, somewhat, or not at all supported by the evidence?
Components of a Research Report – Research Design:
- Remember: Empirical political science research is based on data that is collected according to an organized plan. For your readers to evaluate your work, they need to understand how you got that data, and how you interpreted it.
- All research must be replicable and transmissible (Intersubjectivity)
- Replicable: another person could conceivably attempt to do a similar study and draw similar conclusions
- Transmissible: the methods can be explained (Think recipe!)
- Your write-up of your research design must meet these standards
Components of a Research Report – Presentation of Findings:
Doing a good job presenting your findings is all about making choices:
- Which results will you report?
- How will you report them?
- How technical will you be?
- What will your audience be most interested in?
- What information is necessary to understand the core of the argument?
Presentation of Findings:
Even “bad results” can be worth reporting
- Every research project starts with a research hypothesis, which is a particular relationship that the work wants to test.
- A sample hypothesis: The more education people have, the more likely they are to vote.
- Your research might find that this is true...or it might not.
- These “might not” results are called null findings.
- Null findings are not bad. Instead, they help all researchers better understand the world.
- If younger candidates don’t get young people out to vote, what does?
Components of a Research Report – Discussion:
- The presentation of findings section is where you are technical; the discussion section is where you explain what the findings mean.
- This is the part of the report where your voice is strongest and your argument sinks or swims.
- It is also where you can make recommendations for action or suggestions for future research. What kind of recommendations you make depends on what the goals of your research are.
Components of a Research Report – Conclusion:
- Don’t just stop!
- Remember the old journalistic adage: “Tell them what you’re going to tell them, and then tell them what you told them.”
- This is your last chance to make an impression on your reader. What do you want them to carry away?
- Some people will only read the conclusion.
If they do so, will they understand what you were trying to do?
Components of a Research Report – References:
- Replication/Intersubjectivity: allowing readers to see where you got your information, build from that information to their own conclusions, and go further in their own work.
- Academic honesty: properly representing where you got your ideas, rather than passing off other people’s work as your own.
- There is a lot of software to help you manage references.
- Some of it will format citations correctly.
- Some of it will help you keep track of your notes and citations for the things you read.
- All of it is worth finding – ask your university library for help.
Key Terms:
- Components of a research report (listed on slide 12)
- Grey literature
- Null findings
- Predatory journals
- Replicable
- Research hypothesis
- Transmissible
Chapter 3 and quiz 3 – Completed
Chapter 4 and quiz 4 – 4.1 completed
Topic 3: Conducting Research Ethically – Sept 17, 2024
- Basic ethical principles
- The meaning of informed consent
- Why can the principle of informed consent be problematic?
- The cost-benefit approach
Basic ethical principles
- There should be no deception involved in the research.
- There should be no harm (physical, psychological, or emotional) done to participants (confidentiality).
- Participation should be voluntary (right to withdraw).
- Participation should be based on informed consent.
The meaning of informed consent
Informed consent can be defined as “the idea that respondents in a research project fully understand the nature of the project and the extent of their participation and agree to participate based on these understandings” (Berdahl and Roy, 2021).
This definition raises four issues:
- Competence
- Voluntarism
- Full information
- Comprehension
Why can the principle of informed consent be problematic?
How much information is needed for consent to be “informed”?
- As much as a person concerned about his or her own welfare would need to know before making a decision.
- But...
- What if it is extremely important that participants do not know the true purpose of the study?
- What if it is extremely important that participants not even know that they are being studied?
The cost-benefit approach
The cost-benefit approach involves weighing the potential contribution to knowledge and human welfare against the potential negative effects on the dignity and welfare of the participants.
This approach can be problematic:
- The ethical issues involved can be subtle, ambiguous, and debatable.
- We are not weighing known costs and benefits, but possible costs and benefits.
- The process of balancing costs and benefits is necessarily subjective.
Key Terms:
- Anonymity
- Applied research
- Basic research
- Confidentiality
- Informed consent
- P-Hacking
- Right to withdraw
Chapter 4 Summary – Continued:
- Concept qualities. Many concepts of interest do not have a natural or set standard unit of distance between categories, and imposing such a scale onto such concepts can seem inauthentic. (A point in favour of ordinal variables.)
- Statistics. As we will explain in later chapters, level of measurement determines the type of statistical measures available for analysis
- The higher the level of measurement, the more powerful the statistics available
- Generalizability. Our aim in research is to form generalizations about groups of people, and that goal requires that we categorize individuals based on similar and different characteristics
- Transformational opportunities. Once data are collected, it is possible to transform data from a higher to a lower level of measurement by grouping categories (for example, by grouping exact ages into age categories), but it is impossible to transform data from a lower to a higher level of measurement
- Data quality.
When data are collected from individuals (for example, in a survey), they may not know a precise value (off the top of your head, what was your exact total pre-tax income for last year?) or may not be willing to share a precise value (our income example works here as well)
Measurement Accuracy:
- Measurement error is the difference between the true value and the measured (observed) value of a quantity
- It consists of random error, which is naturally occurring, non-systematic error, and non-random error, also referred to as systematic error, which is error that results from faults in the measure
- One key consideration for measurement error is reliability, which is the extent to which the measurement of a quantity yields consistent results
- Another essential consideration for measurement error is measurement validity, which refers to the extent to which the measurement of a variable matches its conceptual definition
Using Foundational Knowledge to Critically Assess Political Science Research:
- Internal validity refers to the degree to which a study demonstrates a trustworthy assessment of causality; a study has high internal validity if it is able to satisfy all or most of the causal criteria, and low internal validity if it is only able to satisfy a few of the causal criteria (we revisit internal validity later, along with external validity, which is the extent to which the findings from the cases under examination may be used to make generalizations beyond the original study)
Research Foundations: Theory, Concepts, and Measures 1 – Sept 19, 2024
Overview – Chapter 4 (Topic 4.1 and 4.2):
What is a theory?
Theory building and theory testing
What are concepts?
Nominal (conceptual) definitions
Operational definition
What is a variable?
Variables versus concepts
What are indicators?
Operationalization
Correlation or Causation?
Independent vs. dependent variables
Criteria for causality
What is a hypothesis?
Why are hypotheses so important?
Two hypotheses
Converting a theory into a testable form
Hypotheses and confounding variables
Spurious, intervening, and conditional variables
Rules and levels of measurement
Nominal-level measurement
Ordinal-level measurement
Interval-level measurement
Ratio-level measurement
Validity versus reliability
Systematic versus random errors
Reducing measurement errors
Formulating hypotheses and common errors
What is a theory?
- A theory is a simplified explanation of the world
- A theory can...
- Explain what happened
- Predict
- Explain differences between cases
- Explain change over time
- A theory explains relationships between concepts
- A concept is not a specific example from the world - it is an abstract representation
- Ex. political participation, democracy, and power
- We use concepts to develop a hypothesis that allows us to test our theory.
- Theory-based research asks questions that are driven by theories about how the world works.
- Most basic research is theory-driven, because theories help us understand more about how the world works.
- Applied research often proceeds from problems or needs in the world, but still applies theory in order to develop a stronger knowledge base.
- A theory should be:
- Parsimonious - As simple as possible - “fewest moving parts”
- General - Can explain multiple events – not just a single occurrence
- Falsifiable - Can be tested and proven wrong – something that cannot be proven wrong is a belief, not a theory
Theory building and theory testing:
Theory grows and develops through a balance between theory-building research and theory-testing research.
Theory-Building Research:
- Also called inductive or exploratory research
- Use observations from the real world to build a theory
- Qualitative research is well-suited for theory-building (but you can also use quantitative methods)
Theory-Testing Research:
- Also called deductive research
- Develop a hypothesis from a theory and test it
- Quantitative research is well-suited for theory-testing (but you can also use qualitative methods)
- The point of having a theory is not to prove it, but to be able to make sense of the world using a framework, and then to test it.
- If you test a theory, and the evidence supports it, then your argument is that we can have more confidence in the theory.
- If you test a theory, and the evidence supports it in part but raises questions about other parts, then your argument is that the theory should be modified.
- If you test a theory, and the evidence does not support it, then your argument is that we should have less confidence in it.
- Theory guides political scientists in their work.
What are concepts?
Concept: a defined term that enables us to organize and classify phenomena; an abstract idea that represents qualities in the world. (Berdahl and Roy 2021)
- A concept is a universal descriptive word that refers directly or indirectly to something that is observable.
- Theories are made of concepts – not actual events or phenomena.
- A theory should talk about the relationship between a dependent and an independent concept
- It should look to establish a causal mechanism, which is an explanation of how the concepts are related
Nominal (conceptual) definitions:
Concepts must have a nominal (conceptual) definition, which is specific and clear about what the concept is and is not
- This is harder than it sounds! Is a country a democracy if...
- It holds elections for at least one major political office?
- Those elections are free and fair?
- Citizens have basic civil rights?
- Citizens have the right to dissent?
Political scientists have argued about all these things...and more!
- How have other scholars defined the concept?
- Often there will be competing definitions, and you have to pick one.
- Sometimes existing definitions will not perfectly fit what you are trying to do, and you will need to add a new layer of analysis.
- Sometimes you need to create an entirely new concept...but this is pretty rare.
Operational definition:
Once you have a concept, you have to figure out how to measure it (operational definition):
Operational definition: a concrete, measurable version of the concept (indicators)
A properly framed operational definition:
- Adds precision to concepts
- Makes propositions publicly testable
This ensures that our knowledge claims are transmissible and makes replication possible (intersubjectivity)
The process of moving from a nominal/conceptual definition (an abstraction) to a measure or a set of measures (concrete) that enables a researcher to empirically observe the concept is called operationalization
What is a Variable?
- Any property that varies (i.e. takes on different values) can potentially be a variable.
- Variables are empirically observable properties that take on different values.
Variables vs. Concepts:
Variables require more specificity than concepts. One concept may be represented by several different variables.
- Concept: Economic dependency – reliance on a limited number of trading partners; reliance on a limited range of exports; extent of foreign ownership
- Concept: Socio-economic status – income; occupation; education
- Concept: Political sophistication – ability to use left-right terminology; level of information about politics
In order to be of any use in research, variables must be: exhaustive and mutually exclusive
What are indicators?
Indicators: observable properties that indicate which category of the concept is present or the extent to which the concept is present.
“...the means by which we assign each individual case to the different values of the variable” (Berdahl and Roy 2021)
Operationalization:
Correlation or Causation?
- You have probably heard “correlation is not causation.” But what does that mean?
- Correlation is when two variables have a predictable relationship with each other.
- Causality is when one event has an effect on another.
- Often, we use correlation to look at possible causal relationships. But correlation alone cannot tell us if the relationship is causal or not.
- A hypothesis will specify a relationship between two variables (a correlation) and which one is the cause and which one the effect (causality).
- A causal mechanism is the reason, rooted in your original theory, that explains why a causal relationship makes sense.
- Causal relationships are defined, in the scientific approach to politics, through the relationships between independent and dependent variables (hypothesis).
- Independent variable (IV): the one causing the change
- Dependent variable (DV): the one that is changed because of the other variable
Independent vs. Dependent Variables:
Cause (Independent Variable) ---> Effect (Dependent Variable)
- Dependent variable (effect): the phenomenon that we want to explain
- Independent variable (cause): the factor that is presumed to explain the dependent variable
- Example: The higher a person’s interest in politics (IV), the more likely they are to vote (DV).
Criteria for Causality:
1. Correlation – if there is no correlation between the two variables, then the independent variable does not cause the dependent variable.
2. Temporal Order – the cause must come before the effect (this can be harder to prove than you think!)
3. Absence of Confounding (spurious) Variables – is something else actually the cause?
4. Plausible causal mechanism – does the theory you are proposing make sense?
5. Consistency – does it apply only in one case, or does it apply more generally?
What is a Hypothesis?
Why are Hypotheses so Important?
- Hypotheses provide a bridge between theory and observation.
- Hypotheses are essentially predictions of the form, if A, then B, that we set up to test the relationship between A and B.
- Hypotheses enable us to derive specific empirical expectations (‘working hypotheses’) that can be tested against reality.
- Hypotheses direct investigation.
- Hypotheses provide a pre-defined rationale for relationships. If we have hypothesized that A and B are related, we can have much more confidence in the observed relationship than if we had just happened upon it.
- Hypotheses may be affected by the researcher’s own values and predispositions, but they can be tested, and confirmed or disconfirmed, independently of any normative concerns that may have motivated them.
- Even when hypotheses are disconfirmed, they are useful since they may suggest more fruitful lines for future inquiry – and without hypotheses, we cannot tell positive evidence from negative evidence.
Two hypotheses:
When you test a hypothesis, you actually have two hypotheses:
- The null hypothesis: the variables have no relationship to each other (rejection is good!)
- The research hypothesis: the relationship you actually want to test (the relationship you think exists)
- The research hypothesis should include a clear statement of causality, and usually a direction of co-variation – positive correlation, meaning that both variables move in the same direction, or negative correlation, meaning they move in opposite directions.
Converting a theory into a testable form:
Hypotheses and confounding variables:
- Some hypotheses posit bivariate relationships (relationships between two variables) while others posit multivariate relationships (relationships among multiple variables).
- Some types of variables that are not the IV/DV:
- Spurious variable: something that influences both the IV and DV, and creates a problem for causality (you do not want these!)
- Intervening variable: something that comes between the IV and DV
- Conditional variable: something that influences the strength of the effect of the relationship between the IV and DV
- That’s SIC!
Sources of Spuriousness:
To identify a potential source of spuriousness (SS), ask yourself:
1. Is there a variable that might be a cause of both the IV and the DV?
2. Does that variable act directly on the DV and IV?
Example: The more educated (IV) people are, the more they will support feminism (DV).
- Positive relationship
Intervening variables:
Intervening variables: variables that mediate the relationship between the IV and the DV.
An intervening variable provides an explanation of why the IV affects the DV. The DV is related to the IV because the IV affects the intervening variable and the intervening variable, in turn, affects the DV.
To identify plausible intervening variables, ask yourself why you think the IV would have a causal impact on the DV.
Example: The higher people’s education, the more supportive of socialism they will be.
Conditional variables:
Conditional variables are variables that condition the relationship between the IV and the DV by affecting:
1. The strength of the relationship between the IV and the DV (i.e. how well do values of the IV predict values of the DV?) and
2. The form of the relationship between the IV and the DV (i.e. which values of the DV tend to be associated with which values of the IV?)
Example: Individuals who support increased social spending will be more likely to vote for social democratic parties compared to those opposed to increased social spending.
Note: the focus is always on how the hypothesized relationship is affected by different values of the conditional variable.
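The spurious-variable problem can be made concrete with a small simulation. This is a hypothetical sketch, not from the course materials: the variables `z`, `x`, and `y` and all the numbers are invented for illustration. A confounder Z drives both the IV and the DV, producing a correlation between them even though neither causes the other; statistically controlling for Z (here, by removing its linear effect from both variables) makes the correlation disappear.

```python
import numpy as np

# Hypothetical setup: Z causes both X (the IV) and Y (the DV);
# X and Y have no causal link to each other.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)      # source of spuriousness (the confounder)
x = z + rng.normal(size=n)  # IV: driven by Z, not by Y
y = z + rng.normal(size=n)  # DV: driven by Z, not by X

# Correlation WITHOUT controlling for Z: looks like a real relationship.
raw_r = np.corrcoef(x, y)[0, 1]

def residualize(v, control):
    """Remove the linear effect of the control variable from v."""
    slope = np.cov(v, control)[0, 1] / np.var(control)
    return v - slope * control

# Correlation AFTER controlling for Z: the relationship vanishes.
partial_r = np.corrcoef(residualize(x, z), residualize(y, z))[0, 1]

print(f"corr(X, Y) ignoring Z:     {raw_r:.2f}")      # clearly positive
print(f"corr(X, Y) controlling Z:  {partial_r:.2f}")  # near zero
```

This is exactly the diagnostic the two questions above describe: Z is a cause of both the IV and the DV, so the observed X–Y correlation fails the third causal criterion.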
Rules and levels of Measurement:
Categorical concept: differences between values are differences in kind (example: gender)
- Categorical concepts lead to nominal variables, which cannot be ordered or ranked
Continuous concept: differences between values are differences in amount (example: age)
- Continuous concepts produce variables that are ordinal (can be ranked from less to more), interval (there is a precise, measurable distance between scores), and ratio (an interval variable with a true zero)
The level of measurement that can be achieved depends on:
- The nature of the property being measured
- The choice of data collection procedures
The general rule is to aim for the highest possible level of measurement because higher levels of measurement enable us to perform more powerful and more varied tests.
Nominal-level measurement:
- Lowest level of measurement
- It involves classifying a variable into two or more categories and then sorting our observations into the appropriate category.
- Numerals simply serve to label the categories.
- There is no hierarchy among categories and the categories cannot be related to one another numerically.
Ordinal-level measurement:
- Involves classifying a variable into a set of ordered categories and then sorting our observations into the appropriate category according to whether they have more or less of the property being measured.
- The categories stand in a hierarchical relationship to one another and the numerals serve to indicate the order of the categories.
- With ordinal-level measurement, we can say only that one observation has more of the property than another. We cannot say how much more.
Interval-level measurement:
- Involves classifying a variable into a set of ordered categories that have an equal interval between them and then sorting our observations into the appropriate category according to how much of the property they possess.
- There is a fixed and known interval (or distance) between each category and the numerals have quantitative meaning. They indicate how much of the property each observation has.
- We can say not only that one observation has more of the property than another, we can also say how much more. But we cannot say that one observation has twice as much of the property as another observation.
Ratio-level measurement:
- The only difference between ratio-level measurement and interval-level measurement is the presence of a non-arbitrary zero point.
- A non-arbitrary zero point means that zero indicates the absence of the property being measured.
- Now we can say that one observation has twice as much of the property as another observation.
Validity versus reliability:
Validity – are we measuring what we think we are measuring?
Reliability – does our measurement process assign values consistently?
Measurement errors are differences in the values assigned to observations that are attributable to flaws in the measurement process.
Measurement errors can be either systematic or random.
Systematic versus random errors:
Systematic errors occur when our indicator is picking up some other property, in addition to the property it is supposed to measure.
Random errors are chance fluctuations in the measurement results that do not reflect true differences in the property being measured.
Random errors make our measures unreliable and invalid.
Systematic errors are no threat to reliability precisely because they are systematic, i.e. they consistently affect our measurement results. But a reliable measure is not necessarily valid.
- All measures have errors. (It is not possible to conduct a perfect measurement of anything.)
- Measurement error is the difference between the true value of a variable (which you may not know!) and the measured value.
- Measurement error = random error (caused by random happenstance) + non-random error (caused by problems with the measurement technique)
- Random errors will average out over time; non-random errors lead to errors in a particular direction and will create bias in your data.
Reducing measurement error:
- How can you decrease measurement errors?
- Think about reliability: does a measure get the same results each time you use it? Does it get the same results for everybody?
- Think about measurement validity: does the measure match the concept it is trying to measure? If there is a mismatch, how can you make it better?
Reducing measurement error – assessing reliability:
Assessing reliability is basically an empirical matter. The best way to achieve high reliability is to be aware of the sources of unreliability and to guard against them.
Reducing measurement error – validity:
- Types of validity to consider:
- Face validity: does this make any sense on the surface?
- Internal validity: does it satisfy the logical constraints of the study itself?
- External validity: can the findings be used to understand other cases it did not examine?
Formulating hypotheses and common errors:
Recall: Hypotheses can be arrived at either:
- Inductively (by examining a set of data for patterns)
- Deductively (by reasoning logically from a proposition)
The choice depends on whether we are conducting exploratory research or explanatory research.
Hypotheses:
1. State a relationship between two variables
2. Specify how the variables are related
3. Carry clear implications for testing
When both variables are ordinal or interval/ratio, state how the values of the DV (dependent variable) change when the IV (independent variable) changes:
The higher people’s income, the more attention they will pay to politics.
The lower the rate of inflation, the fewer political protests there will be.
When the IV is ordinal or interval/ratio and the DV is nominal, state which category of the DV is more likely to occur when the IV changes:
The more education people have, the more likely they are to vote.
As income inequality increases, civil disorder is more likely to occur.
When the IV is nominal and the DV is ordinal or interval/ratio, state which category of the IV will result in more of the DV:
Women tend to be more supportive of increased funding to social programs than men.
When both the IV and the DV are nominal, state which category of the DV is more likely to occur with which category of the IV:
Men are more likely than women to run for elected office.
Formulating hypotheses and common errors – review:
1. Error # 1: We need to have two variables in the hypothesis
2. Error # 2: We need to state how the two variables are related
3. Error # 3: (Incompletely specified) When the IV is categorical, the reference categories must always be included
4. Error # 4: (Improperly specified – most common error) The comparison must be made in terms of the IV, not the DV
5. Error # 5: We must avoid normative statements – hypotheses must never contain words like ‘should’, ‘ought’, or ‘better than’.
6. Error # 6: Hypotheses must not contain proper names (because this limits generalizability)
Error # 7: Hypotheses must not be tautological – two variables that have different names but mean the same thing
Populations of Study – Chapter 5 Summary
- The group that we wish to generalize about is known as a population (another commonly used term is universe)
- A population of study may be comprised of people (for example, Canadian citizens, political candidates, Supreme Court judges) or it may be comprised of things (for example, social media posts, political party platforms, Supreme Court decisions)
- When a population is sufficiently small, or when researchers have sufficient resources, it is possible for researchers to do a census study, in which all members/units of the population are included in the study
- The solution is to select a sample (or subset) of cases from the population of interest
- Sampling – the process of drawing a sample of cases from a larger population – is utilized in all forms of data collection
- We will discuss sampling issues specific to individual research designs in Part II; this chapter provides a foundation for those future discussions by exploring the logic of sampling and outlining a number of common sampling techniques
Populations and Samples:
- When doing so, three factors must be considered: the unit of analysis (e.g., will your study focus on individuals, political parties, municipal governments, etc.), the geographic location, and the reference period (time under consideration) (Statistics Canada, 2013)
- When the scores of each member (or case) of the population are measured in numeric form, the resultant characteristic is known as a population parameter
- When the scores of a sample are measured in numeric terms, this information is known as a sample statistic
- The issue of accurately reflecting the population is important, because political science research ideally will be portable, meaning that the results from a study can be applied in some way to another context
- In quantitative research, portable research is
defined in terms of external validity, which is the extent to which the findings from the cases under examination may be used to generalize beyond the original study
- In qualitative research, portable research is defined in terms of transferability, which is the extent to which researchers can export lessons drawn from the study to develop conclusions about another set of cases
Representative Samples:
- If we are to generalize about a population from a sample with confidence, as quantitative research often seeks to do, we must use a representative sample, that is, one that accurately represents the larger population from which it was drawn
- A sampling frame is a list of all the units in the target population
- If our target population is students at Canadian universities during the current academic year, our sampling frame would list all registered students
The Importance of Sampling Frames: The Case of Literary Digest:
Another problem with the survey was non-response bias, in which the individuals who respond and do not respond to an invitation to participate in research are different from one another in some important way
Coverage bias occurs when the sampling frame fails to include some groups in the population
One technique popular among telephone survey researchers is random-digit dialing, which involves the computer generation of telephone numbers; therefore, the sampling frame is all active telephone numbers
The second factor that influences the representativeness of a sample is the sample selection method, which is the manner by which cases in the population are selected for inclusion in the sample
Probability sampling techniques are based on probability theory and allow researchers to use statistics (discussed in chapter 13) to test the representativeness of their sample
Non-probability sampling techniques are not based on probability theory, and researchers are not able to use statistical analysis to make inferences from the sample to the larger population of study
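The probability-sampling logic can be sketched in a few lines of Python. This is a hypothetical illustration: the population (voter ages), the sample size, and the variable names are all invented, not taken from the chapter. Because every case has a known, equal chance of selection in a simple random sample, the sample statistic is a sound estimate of the population parameter, and the gap between the two is the sampling error.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: ages of 10,000 registered voters.
population = [random.randint(18, 90) for _ in range(10_000)]
parameter = statistics.mean(population)   # population parameter (usually unknown)

# Simple random sample: every case has an equal chance of selection.
sample = random.sample(population, k=500)
statistic = statistics.mean(sample)       # sample statistic (our estimate)

# Sampling error: the difference between estimate and true value.
sampling_error = statistic - parameter
print(f"population parameter: {parameter:.1f}")
print(f"sample statistic:     {statistic:.1f}")
print(f"sampling error:       {sampling_error:+.1f}")
```

With a probability sample like this, the sampling error tends to be small and shrinks as the sample size grows; with a non-probability sample (say, an opt-in panel), no comparable statistical guarantee is available.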
Probability sampling is commonly used in quantitative research, although many quantitative studies use non-probability sampling in the form of Internet opt-in panel surveys, in which participants make the decision to join the panel rather than being contacted through random selection, and experimental research
- The opt-in panel sample was also found to minimize social-desirability bias (a type of bias in which respondents alter their responses to appeal to the interviewer) while performing equally as well as the telephone sample when it came to estimating both political attitudes and vote choice
- The third factor that determines a sample's representativeness is sample size, which is the number of cases included in the full sample
- Simple random sampling is the process by which every case in the population is listed and the sample is selected randomly from the list. (Computers simplify this process.)
- The population parameter we are interested in is the mean (or arithmetic average), which is equal to the sum of the individual scores divided by the total number of cases (6); hence, the mean number of pets owned is 2 (12/6)
- The range of values within which the population parameter is likely to fall is known as a confidence interval, a concept we discuss below and will return to later
- The difference between the sample statistic (the estimated value) and the population parameter (the actual value) is referred to as sampling error

Sample Size:

- To determine the appropriate sample size, we need to consider three factors: the homogeneity – how similar a population is with respect to salient characteristics – of the sample, the number of variables under study, and the desired degree of accuracy
- Before conducting an analysis, researchers can state the margin of error – the amount of sampling error, expressed as a percentage – they are willing to accept
- Typically (although not always), researchers employ a 95 percent confidence level, meaning that there is a 95 percent chance
that the population parameter falls within the confidence interval, and a 5 percent chance that the population parameter falls outside of the confidence interval

Statistical Power and Sample Size:

- Statistical power refers to the probability that findings from a sample will allow researchers to correctly identify similar relationships in the larger population from which the sample was drawn
- Effect size refers to the strength of the relationship between variables

Conducting Probability Samples:

- The random selection means that we can identify margins of error, and from these make generalizations and draw conclusions about the general population
- Occasionally, researchers will use systematic random sampling
- Stratified random sampling involves breaking the population into mutually exclusive subgroups, or strata, and then randomly sampling each group
- Assuming a proportionate stratified random sampling approach, we would generate a final sample that reflected the proportion of each set of students in the overall population
- Disproportionate stratified random sampling is used if a particular group of interest is small; by sampling a larger proportion of that subgroup, the researcher ensures that group has numbers large enough to produce meaningful statistics
- To reconstruct a representative national sample, it is necessary to assign design weights within the dataset
- Cluster sampling is the process of dividing the population into several subgroups, known as clusters, and then randomly selecting clusters within which to randomly sample

Non-probability Sampling:

- Non-probability sampling can be accidental or purposive
- In an accidental sample, also known as a convenience sample, researchers gather data from individuals they "accidentally" encounter or who are convenient
- A related, similarly biased form of sampling is any sample that involves self-selection (that is, respondents themselves select whether to be part of the sample), such as website or social media
opinion polls
- In such samples, the participants are limited to those who opt in to the study, and it is possible that they are unrepresentative of the larger population due to self-selection bias
- Purposive sampling (also known as judgement sampling) involves researcher selection of specific cases; the researcher uses his or her judgment to select cases that will provide the greatest amount of information
- (The former example is known as a most similar systems design and the latter a most different systems design)
- Snowball sampling (also known as network sampling) is often employed to study social networks or hard-to-reach populations
- When either accidental or purposive sampling is combined with stratification, the result is known as quota sampling

Sep 26, 2024 – Populations of Study

- Populations and samples
- Logic of estimation
- Applying the logic and terminology of inferential statistics
- Probability versus non-probability sampling
- Probability sampling and probability theory
- Sample size
- Margin of error
- Confidence interval and confidence level
- Law of diminishing returns
- Conducting probability samples
- Conducting non-probability samples

Populations and samples:

Problem: The populations we wish to study are almost always so large that we are unable to gather information from every case.
Solution: We choose a sample – a carefully chosen subset of the population – and use information from the cases in the sample to generalize to the population.

Logic of estimation: In estimation procedures, statistics calculated from random samples are used to estimate the value of population parameters.

Parameters and statistics:
- Statistics are mathematical characteristics of samples.
- Parameters are mathematical characteristics of populations.
- Statistics are used to estimate parameters.
Statistic --> Parameter

Example:
- You want to know what % of students at a large university work during the semester.
- Draw a sample of 500 from a list of all students at the university (N = 20,000).
- Assume the list is available from the Registrar.

Applying logic and terminology of inferential statistics:
- After questioning each of these 500 students (of the 20,000 students), you find that 368 (74%) work during the semester.
- Based on the example above, identify each of the following:
  - Population = 20,000 students.
  - Sample = The 500 students selected and interviewed.
  - Statistic = 74% (% of sample that held a job during the semester).
  - Parameter = % of all students in the population who hold a job.
- How do we know how close the statistics are to the actual population parameter?
- Probability (random) sampling and probability theory (math can fix this!)

Probability (or random) sampling: every member of the population has a known and non-zero probability of being included in the sample.
Non-probability (or non-random) sampling: there is no way of specifying the probability of inclusion, and no assurance that every member of the population has at least some probability of inclusion.

Probability sampling has two crucial advantages:
1. Avoids conscious or unconscious bias on the researcher's part.
2. Allows us to use inferential statistics.

Despite these advantages, non-probability sampling is used when:
1. Convenience and economy outweigh the risk of having an unrepresentative sample.
2. No population list or surrogate population list is available.
3. Qualitative research / small n / not intending to generalize to larger population.
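The logic of estimation above can be sketched in code. This is a hypothetical population constructed so that exactly 74% of its members work (in real research the parameter is unknown); only Python's standard library is assumed:

```python
import random

random.seed(1)  # fixed seed for a reproducible draw

# Hypothetical population of N = 20,000 students; True = works during the
# semester. Built so the population parameter is exactly 74%.
N = 20_000
population = [True] * int(N * 0.74) + [False] * int(N * 0.26)

parameter = sum(population) / N  # 0.74 -- usually unknown in real research

# Draw one probability (simple random) sample of 500 and compute the statistic.
sample = random.sample(population, k=500)
statistic = sum(sample) / 500

# The statistic estimates the parameter; the gap between them is sampling error.
sampling_error = abs(statistic - parameter)
print(f"parameter = {parameter:.3f}, statistic = {statistic:.3f}, "
      f"sampling error = {sampling_error:.3f}")
```

Re-running this with different seeds gives slightly different statistics, but probability theory lets us say how far from 0.74 they are likely to stray, which is what the next section formalizes.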
Probability sampling and probability theory:
- The probability of an event happening is P(A) = r/n, where:
  - P(A) means "the probability of A happening"
  - r means the number of "favorable outcomes"
  - n means the number of total outcomes
- So:
  - The probability of getting a six when you roll one die is P(A) = 1/6 = 0.17
  - The probability of getting an Ace when you draw one card from a full deck is P(A) = 4/52 = 1/13 = 0.08
- Sampling error is the difference between the population parameter and the sample statistic for a given sample.
- Remember: You probably do not know the population parameter!
- In this example, if our sample statistic is 1.5, the sampling error is 0.5 – the difference between the actual population parameter and our sample statistic.
  - Sample statistic (mean) = 1.5
  - Population parameter (mean) = 2.0
  - Sampling error = 0.5

Sample Size:
- Bigger samples mean less sampling error
- Imagine asking 10 students in PO217 what they think of the course versus 100. Which one is likely to get closer to the population parameter?
- You also need a bigger sample size if:
  a. Your sample is very heterogeneous (members of the population are very different from each other)
  b. You are using a lot of variables in your study
  c. You want to get the most accuracy possible from your results
- But bigger samples = bigger $

Margin of error:
- Margin of Error: the amount of sampling error you are willing to accept
- If your sample statistic is 45% and your margin of error is 5%, then you can say that the population parameter is between 40% and 50%
- This (CI = 45% +/- 5%) is called your confidence interval (in that you have confidence that the population parameter is in there).
- Confidence interval: the sample statistic plus or minus the margin of error

Example 1: 40 – 45 – 50
Confidence Level = 95%; Margin of Error = +/- 5%; Sample = 384

Example 2: 35 – 45 – 55
Confidence Level = 99%; Margin of Error = +/- 10%; Sample = 384 (the slide noted this sample size should be changed; 384 corresponds to the 95%/±5% case)

Confidence interval and confidence level:
- There is always a chance that the population parameter does not fall within the confidence interval. How likely is this? That is determined by the confidence level, the probability that the sample statistic is an accurate estimate of the population parameter.
- Typically, we want to be 95% sure we have captured the population parameter (e.g., "19 times out of 20," meaning that 19 times out of 20 (95 per cent) the population parameter will fall within the confidence interval).
- Sample size is based on how confident we want to be; the more confident, the larger the sample required. But note the law of diminishing returns: beyond a certain point, further increases in sample size yield only small gains in accuracy.

Conducting Probability Samples:

Simple random samples: Simple random sampling gives every member of the population (sampling frame) an equal probability of inclusion, and gives every possible combination (of the desired sample size) of members of the population an equal probability of inclusion.
Disadvantages:
1. Extreme samples
2. Tedious and time-consuming (although less so with the availability of electronic population lists).

Systematic random sampling, Part I: Systematic random sampling involves dividing the total population size by the desired sample size to yield the sampling interval (conventionally denoted 'k'). Then, beginning with a randomly selected person from among the first k people, the researcher selects every kth person.
Example:
Population size = 10,000
Desired sample size = 500
k = 10,000/500 = 20
The researcher would randomly select one person from among the first 20 – say, the 14th person – and then select every 20th person (14, 34, 54, 74, etc.)
Provided the first person is selected randomly, there is, a priori, no restriction on the probability of inclusion.

Systematic random sampling, Part II:
Advantages:
- Less cumbersome than simple random sampling – only one random number is required, and thereafter it is simply a matter of counting off every kth person.
- Reduces the risk of extreme samples, since only combinations of people k positions apart can be selected.
Disadvantages:
- Can produce extreme samples if there is cyclical order in the population list and this order coincides with the sampling interval.

Proportionate stratified random samples:
Proportionate stratified random sampling: Ensures that key groups within the population are represented in the correct proportion.
The stratification variables must be:
- Relevant to the phenomenon to be explained
- Operationalizable – this means that we require information about the value of each person in the population on the stratification variable(s) before conducting our study.
Advantages:
- Avoids extreme samples for the characteristics that are used to stratify the population (e.g., everyone from Ontario)
- Increases the level of accuracy for a given total sample size OR achieves the same accuracy at lower cost.
Disadvantages:
- Can only stratify on a few selected variables
- It may not be possible to stratify on theoretically crucial variables because we do not know their values ahead of time (e.g., income, education).

Disproportionate stratified random samples:
Disproportionate stratified random sampling is the same as proportionate stratified random sampling except that the researcher deliberately over-samples some strata and/or under-samples others. This is done for analytical reasons:
- To facilitate statistical analysis by having an equal number of cases in the different categories of the IV.
- To ensure sufficient cases for meaningful analysis where a stratum is small but substantively or theoretically important.
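Proportionate stratified sampling can be sketched briefly. The frame, the strata labels, and the sample size below are all hypothetical, and only Python's standard library is assumed:

```python
import random
from collections import defaultdict

random.seed(7)  # fixed seed for a reproducible draw

# Hypothetical sampling frame: 10,000 students, each tagged with a
# stratification variable (year of study) whose value is known in advance.
frame = (
    [("first-year", i) for i in range(4_000)]
    + [("upper-year", i) for i in range(4_000, 9_000)]
    + [("graduate", i) for i in range(9_000, 10_000)]
)

def proportionate_stratified_sample(frame, n):
    """Randomly sample within each stratum, in proportion to stratum size."""
    strata = defaultdict(list)
    for stratum, unit in frame:
        strata[stratum].append((stratum, unit))
    sample = []
    for stratum, units in strata.items():
        k = round(n * len(units) / len(frame))  # proportionate allocation
        sample.extend(random.sample(units, k))
    return sample

sample = proportionate_stratified_sample(frame, n=500)

# Stratum proportions in the sample mirror the population: 40% / 50% / 10%.
counts = {s: sum(1 for stratum, _ in sample if stratum == s)
          for s in ("first-year", "upper-year", "graduate")}
print(counts)  # {'first-year': 200, 'upper-year': 250, 'graduate': 50}
```

A disproportionate version would simply replace the `round(...)` allocation with fixed per-stratum quotas (e.g., over-sampling the small graduate stratum), after which design weights would be needed to rebuild a representative overall sample.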
Cluster random sampling:
Cluster random sampling is used when no population list is available (e.g., all university students in Canada, all eligible voters, etc.). Sampling proceeds in stages.
Advantages:
- You do not need a complete population list.
- Reduces costs in sampling a geographically scattered population by concentrating interviews within selected localities.
Disadvantages:
- Increases the risk of sampling error because each stage has its own associated risk of sampling error.

Conducting Non-probability Samples:

Accidental (convenience) samples: Select whatever people happen to be conveniently available (e.g., the first 100 people who agree to be interviewed; students in an introductory methods class).
Volunteer samples: People self-select to participate in the study.
Purposive (judgmental) samples: The researcher uses his or her judgement and knowledge of the target population to select the sample, purposively trying to obtain a sample that appears to be representative.
Snowball samples: Start with a small group of participants who fit the study criteria and ask them to help identify others who may fit the criteria.
Quota sampling: Select a sample that represents a microcosm of the target population.

Quiz Review – Oct 1, 2024

Chapter 4 – Operationalization:
Example: Socio-economic status: the material status of a person based on their class status. Operationalization converts a theory into a testable form.
Example: What is an empirical question? One that suggests hypotheses that can be tested through research.
Inductive research: You observe data first, and research is used to figure out why (moving from observation toward theory).
Deductive research: Start from a question and a theorized reason; test it against research data and form a conclusion.
Causal relationship: A change in the independent variable causes a change in the dependent variable.
Source of spuriousness: Directly impacts both the IV and the DV (e.g., generation is driving the data).
Intervening variable: Affected by the IV and, in turn, affects the dependent variable.
Example: Is education leading to more income, or is income resulting in more support for socialism?
Conditional variables:
- The IV–DV relationship may strengthen in one part of the population while it weakens in another; the conditional variable shapes the relationship between the IV and DV.
Exhaustive and Exclusive:
- Categories are mutually exclusive when each case fits into only one category, and exhaustive when every case fits into some category.
- All variables should follow this format.
How can you identify the level of measurement of variables in a hypothesis?
Levels of measurement (NOIR):
N: Nominal – Lowest level of measurement; carries the least information – only whether cases fit into the same groups.
O: Ordinal – Observed data are organized into categories that can be ranked in order.
I: Interval – Values can be compared, with equal distances between them, but there is no true zero.
R: Ratio – Has all the properties of the other levels plus a true zero point; the highest level of measurement and the most flexible.
Validity:
- Are you getting valid and accurate measurements – does the measure capture what it is supposed to capture?
Reliability:
- Getting the same measurement repeatedly; a reliable measure is not necessarily a valid one.
Systematic error: Reliable but not valid data.
Random error: Gives different values each time – neither reliable nor valid.
Confidence interval and confidence level – covered in Chapter 5.
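Since confidence intervals, confidence levels, and the n = 384 figure recur throughout these notes, here is a small sketch of the standard normal-approximation formulas for a proportion (a generic illustration, not taken from the textbook; only Python's standard library is assumed):

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    """Sample size needed for a proportion at a given margin of error.
    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the most
    conservative (most heterogeneous) assumption about the population."""
    return (z ** 2) * p * (1 - p) / margin_of_error ** 2

def confidence_interval(statistic, n, z=1.96):
    """Confidence interval for a sample proportion: statistic +/- margin of error."""
    moe = z * math.sqrt(statistic * (1 - statistic) / n)
    return statistic - moe, statistic + moe

# The classic n = 384 figure: 95% confidence, +/- 5% margin of error.
print(round(required_sample_size(0.05)))   # 384

# Law of diminishing returns: halving the margin of error to +/- 2.5%
# roughly quadruples the required sample size.
print(round(required_sample_size(0.025)))  # 1537

# CI for the earlier example: 74% of a sample of 500 students work.
low, high = confidence_interval(0.74, 500)
print(f"{low:.3f} to {high:.3f}")
```

The quadrupling illustrates why margin of error, not population size, dominates sampling cost: accuracy improves with the square root of n, so each extra point of precision gets progressively more expensive.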
