Final Summary ISSR: Introduction to Social Science Research (Erasmus Universiteit Rotterdam)
Summary
This document is a summary of Introduction to Social Science Research, outlining key concepts like agreement reality, epistemology, methodology, and causal/probabilistic reasoning. It also discusses sources of knowledge (tradition and authority), common errors in causal inquiries, and different views of reality. The document also introduces social theories and paradigms, exploring their role in research.
Introduction to Social Science Research – CM1002

Chapter 1: Beginning Principles

Most of what we know is a matter of agreement and belief. Little of it is based on personal experience and discovery. The basis of knowledge is agreement. You can also know things through direct experience and observation. Scientists have certain criteria that must be met before they'll accept the reality of something they haven't personally experienced. An assertion must have both logical and empirical support: it must make sense and it must not contradict actual observation.

Agreement reality = those things we 'know' as part and parcel of the culture we share with those around us.
Epistemology = the science of knowing; systems of knowledge.
Methodology = the science of finding out; procedures for scientific investigation.
Causal reasoning = we generally recognize that future circumstances are somehow caused or conditioned by present ones.
Probabilistic reasoning = effects occur more often when the causes occur than when the causes are absent – but not always (think of studying for a test: studying hard usually leads to a high grade, but not always).

Human inquiry aims at answering both what and why questions, and we pursue these goals by observing and figuring out.

Two important sources of our second-hand knowledge:
1. Tradition. Each of us inherits a culture; we learn from others. Tradition offers some advantages to human inquiry: knowledge is cumulative, and an inherited body of knowledge is the jumping-off point for developing more of it. Tradition also has disadvantages: most of us rarely think of seeking a different understanding of something we know to be true.
2. Authority. Authority offers some advantages to human inquiry: we do well to trust the judgement of a person who has special training. Authority has disadvantages when we depend on the authority of experts speaking outside their realm of expertise.

If we can understand why things are related to one another, why certain regular patterns occur, we can predict even better than if we simply observe and remember those patterns. Despite the power of tradition, new knowledge appears every day. Acceptance of new acquisitions depends on the status of the discoverer. Tradition and authority can both assist and hinder human inquiry.

Common errors in causal inquiry and the ways science guards against them (error, then solution):
- Inaccurate observations: make observation more deliberate.
- Overgeneralization: seek a sufficiently large sample of observations. Replication = repeating an experiment to expose or reduce error.
- Selective observation (once you have concluded that a particular pattern exists and have developed a general understanding of why, you tend to focus on future events and situations that fit the pattern and overlook the ones that don't): decide in advance on the number of persons you will interview, or deliberately seek out deviant cases.
- Illogical reasoning (e.g., treating a found exception as the rule, or assuming that after something bad happens something good is more likely to happen): find people who will make sure you stay honest.
Naïve realism = the way most of us operate in our daily lives; we don't really think about how things work.

Views of reality:
- Premodern view: this view of reality has guided most of human history. Our early ancestors assumed that they saw things as they really were; this assumption was fundamental.
- Modern view: this view accepts diversity in ways of thinking.
- Postmodern view: neither the spirits nor the dandelion exists; all that's real are the images we get through our points of view. This is a critical dilemma for scientists.

A scientific understanding of the world must make sense and correspond with what we observe. The scientific enterprise consists of theory, data collection and data analysis.

Theory = a systematic explanation for the observations that relate to a particular aspect of life: juvenile delinquency, for example, or perhaps social stratification or political revolution.

Social scientists often study motivations and actions that affect individuals, but they seldom study the individual per se. Social science theories try to explain why aggregated patterns of behaviour are so regular even when the individuals participating in them may change over time. They try to understand the systems in which people operate.

Variable = a logical set of attributes (e.g., the variable sex).
Attribute = a characteristic of a person or a thing. The variable sex is made up of the attributes male and female.
Independent variable = a variable with values that are not problematic in an analysis but are taken as simply given. An independent variable is presumed to cause or determine a dependent variable (it is the cause). Example: hours of studying.
Dependent variable = a variable assumed to depend on or be caused by the independent variable (it is the effect). Example: grade.

Social research is a vehicle for exploring something. Dialectics of social research:
1. Idiographic = an approach to explanation in which we seek to exhaust the idiosyncratic (individual) causes of a particular condition or event (more space for individual cases).
   Nomothetic = an approach to explanation in which we seek to identify a few causal factors that generally impact a class of conditions or events (explain patterns).
2. Induction = the logical model in which general principles are developed from specific observations (observations -> theory).
   Deduction = the logical model in which specific expectations or hypotheses are developed on the basis of general principles (theory -> observations).
3. Quantitative and qualitative data = the distinction between numerical and non-numerical data. Quantification often makes our observations more explicit and opens the possibility of statistical analysis (e.g., surveys). Qualitative data are richer in meaning and detail (e.g., in-depth interviews). The qualitative approach seems more aligned with idiographic explanations, whereas nomothetic explanations are more easily achieved through quantification.
4. Pure and applied research = understanding (gaining knowledge for its own sake) versus application of that knowledge.

Main points chapter 1:
- The subject of this book is how we find out about social reality.

Looking for reality
- Much of what we know, we know by agreement rather than by experience. Scientists accept an agreement reality but have special standards for doing so.
- The science of knowing is called epistemology; the science of finding out is called methodology.
- Inquiry is a natural human activity. Much of ordinary human inquiry seeks to explain events and predict future events.
- When we understand through direct experience, we make observations and seek patterns or regularities in what we observe.
- Two important sources of agreed-on knowledge are tradition and authority. However, these useful sources of knowledge can also lead us astray.
- Whereas we often observe inaccurately in day-to-day inquiry, researchers seek to avoid such errors by making observation a careful and deliberate activity.
- We sometimes jump to general conclusions on the basis of only a few observations, so scientists seek to avoid overgeneralization by committing to a sufficient number of observations and by replicating studies.
- In everyday life we sometimes reason illogically. Researchers seek to avoid illogical reasoning by being as careful and deliberate in their reasoning as in their observations. Moreover, the public nature of science means that others can always challenge faulty reasoning.
- Three views of reality are the premodern, modern and postmodern views. In the postmodern view there is no objective reality independent of our subjective experiences. Different philosophical views suggest a range of possibilities for scientific research.

The foundations of social science
- Social theory attempts to discuss and explain what is, not what should be. Theory should not be confused with philosophy or belief.
- Social science looks for regularities in social life.
- Social scientists are interested in explaining human aggregates, not individuals.
- Theories are written in the language of variables.
- A variable is a logical set of attributes. An attribute is a characteristic, such as male or female.
- In causal explanations, the presumed cause is the independent variable, and the affected variable is the dependent variable.
- Social research has three main purposes: exploring, describing and explaining social phenomena.
- Ethics plays a key role in the practice of social research.

Some dialectics of social research
- Whereas idiographic explanations seek to present a full understanding of specific cases, nomothetic explanations seek to present a generalized account of many cases (individual vs. patterns).
- Inductive theories reason from specific observations to general patterns. Deductive theories start from general statements and predict specific observations.
- Quantitative data are numerical, qualitative data are not. Both types of data are useful for different research purposes.
- Both pure and applied research are valid and vital parts of the social research enterprise (theory, data collection, data analysis).

Chapter 2: Research and Theory

Paradigm = a model or framework for observation and understanding, which shapes both what we see and how we understand it. The conflict paradigm causes us to see social behaviour one way; the interactionist paradigm causes us to see it differently. Paradigms don't explain anything, but they provide logical frameworks within which theories are created.

Theory functions in three ways in research:
1. It prevents our being taken in by flukes (coincidences).
2. Theories make sense of observed patterns in ways that can suggest other possibilities.
3. Theories can shape and direct research efforts, pointing toward likely discoveries through empirical observation.
Benefits of using a paradigm:
- We are better able to understand the seemingly bizarre views and actions of others who are operating from a different paradigm.
- At times we can profit from stepping outside our paradigm; we can discover new ways of seeing and explaining things.

Social science paradigms represent a variety of views, each of which offers insights the others lack while ignoring aspects of social life that the others reveal.

Macrotheory = a theory aimed at understanding the big picture of institutions, whole societies, and the interactions among societies.
Microtheory = a theory aimed at understanding social life at the level of individuals and their interactions.
Mesotheory = theory aimed at studying organizations, communities and perhaps social categories such as gender.

Karl Marx suggested that social behaviour could best be seen as a process of conflict: the attempt to dominate others and to avoid being dominated (= conflict paradigm).
Symbolic interactionism = most interactions revolve around individuals' reaching a common understanding through language and other symbolic systems.
Ethnomethodology = people are continuously trying to make sense of the life they experience; in a way, everyone is acting like a social scientist.
Structural functionalism grows out of a notion introduced by Comte and others: a social entity, such as an organization or a whole society, can be viewed as an organism.
Feminist paradigms = draw attention to aspects of social life that other paradigms do not reveal. They not only reveal the treatment of women or the experience of oppression but often point to limitations in how other aspects of social life are examined and understood.
Feminist standpoint theory = the idea that women have knowledge about their status and experience that is not available to men.
Interest convergence = the thesis that majority group members will only support the interests of minorities when those actions also support the interests of the majority group.
Critical race theory: the word 'critical' in the name of a paradigm or theory likely refers to a non-traditional view, one that may be at odds with the prevailing paradigms of an academic discipline or with the mainstream structure of society.

Our subjectivity is individual; our search for objectivity is social (agreement reality).
Critical realism = a paradigm that holds that things are real insofar as they produce effects.

There are three main elements in the traditional model of science, typically presented in the order in which they are implemented: theory, operationalization and observation.
- Theory: scientists begin with a theory, from which they derive a hypothesis that they can test. Hypothesis = a specified testable expectation about empirical reality that follows from a more general proposition.
- Operationalization: to test any hypothesis we must specify the meanings of all variables involved in it and specify how we will measure the variables we have defined. Operationalization is one step beyond conceptualization: it is the process of developing operational definitions, or specifying the exact operations involved in measuring a variable. Operational definition = the concrete and specific definition of something in terms of the operations by which observations are to be categorized.
- Observation: looking at the world and making measurements of what is seen.
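A rough Python sketch of the theory, operationalization and observation sequence above, using the study-hours and grade example from Chapter 1. The data, the measurement choices and the simple correlation check are illustrative assumptions, not taken from the book.

```python
# Illustrative sketch (hypothetical data and measurement choices): the traditional
# deductive sequence of theory -> hypothesis -> operationalization -> observation.

# Theory: studying improves exam performance.
# Hypothesis: students who study more hours obtain higher grades.
# Operationalization: hours of study = self-reported study hours in the week
# before the exam; grade = exam grade on a 1-10 scale.

# Observation: hypothetical measurements for five students.
hours = [2, 4, 6, 8, 10]
grades = [5.5, 6.0, 7.0, 7.5, 8.5]

# Compare the observed pattern with the hypothesis: is the correlation positive?
n = len(hours)
mean_h, mean_g = sum(hours) / n, sum(grades) / n
cov = sum((h - mean_h) * (g - mean_g) for h, g in zip(hours, grades)) / n
sd_h = (sum((h - mean_h) ** 2 for h in hours) / n) ** 0.5
sd_g = (sum((g - mean_g) ** 2 for g in grades) / n) ** 0.5
r = cov / (sd_h * sd_g)

print("Observed correlation:", round(r, 2))
print("Consistent with the hypothesis?", r > 0)  # crude check; real tests use significance levels
```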
Null hypothesis = in connection with hypothesis testing and tests of statistical significance, the hypothesis that suggests there is no relationship among the variables under study.

Theory and research can be accomplished both inductively and deductively.
- Deductive method: 1. hypothesis, 2. observations, 3. accept or reject the hypothesis.
- Inductive method: 1. observations, 2. finding a pattern, 3. tentative conclusion.
Both deduction and induction are legitimate and valuable approaches to understanding. Deduction begins with an expected pattern that is tested against observations, whereas induction begins with observations and seeks to find a pattern within them.

Deductive theory construction:
- First pick a topic.
- Make an inventory of what is known or thought about it.
Constructing your theory:
- Specify the topic.
- Specify the range of phenomena your theory addresses.
- Identify and specify your major concepts and variables.
- Find out what is known about the relationships among those variables.
- Reason logically from those propositions to the specific topic you're examining.
Example of deductive theory: Guillermina Jasso's theory of distributive justice illustrates how formal reasoning can lead to a variety of theoretical expectations that can be tested by observation.

There are 2 important elements in science:
1. Logical integrity
2. Empirical verification
Both are essential to scientific inquiry and discovery. Logic alone is not enough, but observation and the collection of empirical facts alone do not provide understanding either. Observation, however, can be the springboard for the construction of a social scientific theory.

Inductive theory construction:
- Observing aspects of social life.
- Seeking to discover patterns that may point to relatively universal principles.
Example of inductive theory: David Takeuchi's (1974) analysis of data gathered from University of Hawaii students about why some students smoked marijuana and others didn't shows that collecting observations can lead to generalizations and an explanatory theory.

Deductive method: research is used to test theories. Inductive method: theories are developed from the analysis of research data.

There is no simple recipe for conducting social science research. Science depends on 2 categories: logic and observation. No matter how practical and/or idealistic your aims, a theoretical understanding of the terrain may spell the difference between success and failure.

Ethics: the choice of a particular paradigm to organize your research will make a big difference. Choosing a theoretical orientation for the purpose of encouraging a particular conclusion would generally be regarded as unethical. However, when researchers intend to bring about social change through their work, they will likely choose a theoretical orientation appropriate to that intention. The danger lies in the bias this might cause in your research. 2 factors counter this potential bias:
1. Social science research techniques, the various methods of observation and analysis, place a damper on our simply seeing what we expect.
2. The collective nature of social research offers protection. When several researchers study the same phenomenon, perhaps using different paradigms, theories and methods, the risk of biased research findings is further reduced.
Main points chapter 2:
- Theories seek to provide logical explanations.

Some social science paradigms
- A paradigm is a fundamental model or scheme that organizes our view of something.
- Social scientists use a variety of paradigms to organize how they understand and inquire into social life.
- A distinction between types of theories that cuts across various paradigms is macrotheory (about large-scale features of society) versus microtheory (about smaller units or features of society).
- The positivistic paradigm assumes we can scientifically discover the rules governing social life.
- The conflict paradigm focuses on the attempt of one person or group to dominate others and to avoid being dominated.
- The symbolic interactionist paradigm examines how shared meanings and social patterns are developed in the course of social interactions.
- Ethnomethodology focuses on the ways people make sense out of life in the process of living it, as though each were a researcher engaged in an inquiry.
- The structural functionalist paradigm seeks to discover what functions the many elements of society perform for the whole system.
- Feminist paradigms, in addition to drawing attention to the oppression of women in most societies, highlight how previous images of social reality have often come from and reinforced the experiences of men.
- Like feminist paradigms, critical race theory both examines the disadvantaged position of a social group and offers a different vantage point from which to view and understand society.
- Some contemporary theorists and researchers have challenged the long-standing belief in an objective reality that abides by rational rules; they point out that it is possible to agree on an 'intersubjective' reality.

Two logical systems revisited
- In the traditional image of science, scientists proceed from theory to operationalization to observation. But this image is not an accurate picture of how scientific research is actually done.
- Social science theory and research are linked through two logical methods: deduction involves the derivation of expectations or hypotheses from theories; induction involves the development of generalizations from specific observations.
- Science is a process involving an alternation of deduction and induction.

Deductive theory construction
- Guillermina Jasso's theory of distributive justice illustrates how formal reasoning can lead to a variety of theoretical expectations that can be tested by observation.

Inductive theory construction
- David Takeuchi's study of factors influencing marijuana smoking among University of Hawaii students illustrates how collecting observations can lead to generalizations and an explanatory theory.

The links between theory and research
- In practice, there are many possible links between theory and research and many ways of going about social inquiry.
- Using theories to understand how society works is key to offering practical solutions to society's problems.

The importance of theory in the real world
- No matter what a researcher's aims are in conducting social research, a theoretical understanding of his or her subject may spell the difference between success and failure.
- If one wants to change society, one needs to understand the logic of how it operates.

Research ethics and theory
- Researchers must guard against letting their choice of theory or paradigms bias their research results.
- The collective nature of social research offers protection against biased research findings.

Chapter 4: Structuring a research project

The three most common and useful purposes of research are:
- Exploration: exploring a topic, done for three purposes:
  o To satisfy the researcher's curiosity and desire for better understanding
  o To test the feasibility of undertaking a more extensive study
  o To develop the methods to be employed in any subsequent study
  Exploratory studies are quite valuable in social science research for breaking new ground or yielding new insights into a topic. Their shortcoming is that they seldom provide satisfactory answers to research questions.
- Description: researchers want to describe situations and events; they describe what they observe.
- Explanation: the attempt to answer 'why' questions by identifying and reporting relationships among aspects of the phenomenon under study.

Nomothetic model = trying to find a few factors (independent variables, causes) that can account for many of the variations in a given phenomenon; probabilistic and usually incomplete.
Idiographic model = seeking a complete, in-depth understanding of a single case; relatively complete.

Three main criteria for nomothetic causal relationships (a small numerical sketch follows below):
1. The variables must be correlated. Correlation = an empirical relationship between two variables such that changes in one are associated with changes in the other, or particular attributes of one variable are associated with particular attributes of the other.
2. Time order: the cause takes place before the effect.
3. The variables are non-spurious: the effect can't be explained in terms of some third variable. Spurious relationship = a coincidental statistical correlation between two variables, shown to be caused by some third variable.

The nomothetic model of causal analysis lends itself to hypothesis testing. You have to specify the variables that are causally related and then specify the manner in which you will measure them. Hypothesizing = stating that two variables will be correlated with each other.

Most of the time a problem has more than one cause. There are always exceptional cases to a causal relationship; causal relationships can even be true if they do not apply in a majority of cases. Mere association, or correlation, does not in itself establish causation.

A necessary cause represents a condition that must be present for the effect to follow (you have to take college courses in order to get a degree).
A sufficient cause represents a condition that guarantees the effect in question, though the effect can also come about in other ways (skipping an exam guarantees failure, but you can fail in other ways too).
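A minimal numerical sketch of the non-spuriousness criterion mentioned above. The ice-cream/accidents example, the season variable and all numbers are hypothetical illustrations, not drawn from this summary.

```python
# Hypothetical illustration of a spurious relationship. Daily ice-cream sales (X)
# and swimming accidents (Y) look strongly correlated, but both are driven by a
# third variable Z (season). Within each season the association disappears,
# which is what the non-spuriousness criterion asks us to check.

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = (sum((x - ma) ** 2 for x in a) / n) ** 0.5
    sb = (sum((y - mb) ** 2 for y in b) / n) ** 0.5
    return cov / (sa * sb)

# Hypothetical daily observations.
season = ["summer"] * 4 + ["winter"] * 4
ice_cream = [10, 12, 11, 13, 2, 4, 3, 5]
accidents = [5, 6, 6, 5, 1, 2, 2, 1]

print("Overall correlation:", round(corr(ice_cream, accidents), 2))  # strong

for s in ("summer", "winter"):
    idx = [i for i, v in enumerate(season) if v == s]
    xs = [ice_cream[i] for i in idx]
    ys = [accidents[i] for i in idx]
    print(f"Correlation within {s}:", round(corr(xs, ys), 2))  # near 0: the overall link is spurious
```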
Units of analysis = the what or whom being studied. In social science research, the most typical units of analysis are individual people. We tend to describe and explain social groups and interactions by aggregating and manipulating the descriptions of individuals, and any type of individual can be the unit of analysis for social research.
- Social groups can also be units of analysis: we may be interested in characteristics that belong to one group, considered as a single entity.
- Formal social organizations can also be the units of analysis in social research.
- Sometimes social interactions are the relevant units of analysis: you study what goes on between people.
- Social artefacts can also be units of analysis. Social artefact = any product of social beings or their behaviour. Each social object implies a set of all objects of the same class.
The easiest way to identify the unit of analysis is to examine a statement regarding the variables under study.

There are two types of faulty reasoning about units of analysis:
- Ecological fallacy = erroneously basing conclusions about individuals solely on the observation of a group ('ecological' here refers to groups or sets larger than individuals). The fallacy consists of confusing units of analysis in such a way that we draw conclusions about individuals solely from observations of groups.
- Reductionism = a strict limitation of the kinds of concepts considered relevant to the phenomenon under study. Reductionism of any type tends to suggest that particular units of analysis or variables are more relevant than others, and it can occur when we use inappropriate units of analysis.

Researchers have two principal options for dealing with the issue of time in the design of their research: cross-sectional studies and longitudinal studies.
- Cross-sectional studies: a study based on observations representing a single point in time. An inherent problem: their conclusions are based on observations made at only one time, while they typically aim at understanding causal processes that occur over time.
- Longitudinal research: a study design involving data collected at different points in time; often the best way to study changes over time.
  o Trend studies = a type of longitudinal study in which a given characteristic of a population is monitored over time.
  o Cohort studies = a study in which some specific subpopulation, or cohort, is studied over time, although data may be collected from different members in each set of observations.
  o Panel studies = a type of longitudinal study in which data are collected from the same set of people (the sample or panel) at several points in time.

Social research follows many paths. Possible beginning points for a line of research: interests, ideas and theories. Then you have to choose the research method. You also have to specify the meaning of the concepts and variables to be studied (conceptualization), determine the population and sampling, and decide how you will actually measure the variables under study (operationalization). After all this you observe, then process the data, analyse them and draw a conclusion. Finally you report the results and assess their implications.

Elements of a research proposal:
- Problem or objective: what exactly do you want to study?
- Literature review: what have others said about this topic? What theories are there?
- Subjects for study: whom or what will you study in order to collect data? The specific research method that you use will further specify this.
- Measurement: what are the key variables in your study?
- Data-collection methods: how will you actually collect the data?
- Analysis: indicates the kind of analysis you plan to conduct; spell out the purpose and logic of your analysis.
- Schedule: providing a schedule for the various stages of research is often appropriate.
- Budget: the money that you need for your research.
- Institutional review board: depending on the nature of your research design, you may need to submit your proposal to the campus institutional review board for approval, to ensure the protection of human subjects.

Main points chapter 4:
- Any research design requires researchers to specify as clearly as possible what they want to find out and then determine the best way to do it.

Three purposes of research
- The principal purposes of social research include exploration, description and explanation. Research studies often combine more than one purpose.
- Exploration is the attempt to develop an initial, rough understanding of some phenomenon.
- Description is the precise measurement and reporting of the characteristics of some population or phenomenon under study.
- Explanation is the discovery and reporting of relationships among different aspects of the phenomenon under study. Descriptive studies answer the question 'what's so?'; explanatory ones answer the question 'why?'.

The logic of nomothetic explanation
- Both idiographic and nomothetic models of explanation rest on the idea of causation. The idiographic model aims at a complete understanding of a particular phenomenon, using all relevant causal factors. The nomothetic model aims at a general understanding (not necessarily complete) of a class of phenomena, using a small number of relevant causal factors.
- There are three basic criteria for establishing causation in nomothetic analysis: 1. the variables must be empirically associated, or correlated; 2. the causal variable must occur earlier in time than the variable it is said to affect; 3. the observed effect cannot be explained as the effect of a different variable.

Necessary and sufficient causes
- Mere association, or correlation, does not in itself establish causation. A spurious causal relationship is an association that in reality is caused by one or more other variables.

Units of analysis
- Units of analysis are the people or things whose characteristics social researchers observe, describe and explain. Typically, the unit of analysis in social research is the individual person, but it may also be a social group, a formal organization, a social interaction, a social artefact, or another phenomenon such as lifestyles.
- The ecological fallacy involves applying conclusions drawn from the analysis of groups to individuals.
- Reductionism is the attempt to understand a complex phenomenon in terms of a narrow set of concepts, such as attempting to explain the American Revolution solely in terms of economics.

The time dimension
- The research of social processes that occur over time presents challenges that can be addressed through cross-sectional studies or longitudinal studies.
- Cross-sectional studies are based on observations made at one time. Although conclusions drawn from such studies are limited by this characteristic, researchers can sometimes use them to make inferences about processes that occur over time.
- In longitudinal studies, observations are made at many times. Such observations may be made of samples drawn from general populations, samples drawn from more specific subpopulations, or the same sample of people each time.
How to design a research project
- Research design starts with an initial interest, idea, or theoretical expectation and proceeds through a series of interrelated steps that narrow the focus of the study so that concepts, methods and procedures are well defined. A good research plan accounts for all these steps in advance.
- At the outset a researcher specifies the meaning of the concepts or variables to be studied (conceptualization), chooses a research method or methods, and specifies the population to be studied and, if applicable, how it will be sampled.
- The researcher operationalizes the proposed concepts by stating precisely how the variables in the study will be measured. Research then proceeds through observation, processing the data, analysis and application, such as reporting the results and assessing their implications.

The research proposal
- A research proposal provides a preview of why a study will be undertaken and how it will be conducted. Researchers must often get permission or the necessary resources in order to proceed with a project. Even when not required, a proposal is a useful device for planning.

The ethics of research design
- Your research design should indicate how your study will abide by the ethical strictures of social research.
- It may be appropriate for an institutional review board to review your research proposal.

Gilbert, Chapter 3: Researching social life

The way a research question is formulated is crucial in drawing together the underlying philosophical approach and conceptualisation with the design of a project, and its methodology and methods. When formulating a research question, think of:
- What recent reading has prompted your interest in particular areas of research?
- Are there current policy-related or perceived social problems that you are particularly interested in?
- Have there been issues in the news recently that have particularly caught your attention, and that you want to research further?
- Are there issues you want to investigate that arise from aspects of your own experience in the social world?
- What observations have you noted recently that might deserve further investigation?

For a research project, you need a question or problem that provides direction for the project, that defines the course of the investigation, and that sets boundaries on the research. A research question must be researchable; it should have six properties:
1. Interesting
2. Relevant
3. Feasible (realizable given sources, time and money)
4. Ethical
5. Concise (brief, not too long, specific)
6. Answerable
The research question should be stated in a way that makes the project feasible, with specific boundaries that make the project delimited and doable. When you have an idea, you must check the ethical dimensions of the research question. A research question must be concise: it must be clear and precisely written. Of course, the research question should also be answerable.

Process of formulating a research question:
1. Go large: hold brainstorming sessions, test these questions against what you already know and see which questions already have answers. Then use concept mapping, a process whereby both logical and creative visual association allows the researcher to consider links and relationships between different concepts.
2. Narrow the list: reconsider your list, check whether there are sub-questions or not, and rank the questions you have left. Keep in mind that the question should be answerable.
3. Refine the questions: make sure the terms you use in your research question are identified and explained. General pattern: inductive reasoning. Hypothesis = a conjecture about relationships between relevant variables, cast as a statement that is testable. It provides a clear proposition of what might be the case, which is then subjected to verification via empirical investigation.
4. Review; ask:
   a. Are all the questions you have decided upon essential?
   b. Are you able to identify the objectives of the research in each question? Be clear about meanings and terms.
   c. Revisit the checklist of a researchable social research question.

It is important to remember that particular research strategies are good or bad to the exact degree that they fit with the questions at hand. The perspectives you adopt and the methods you use need to be as fluid, flexible, and eclectic as is necessary to answer the questions posed.

Chapter 5: Social Measurement

Measurement = careful, deliberate observations of the real world for the purpose of describing objects and events in terms of the attributes composing a variable.
Conceptualization = the process of coming to an agreement about what terms mean. Concept = the result of conceptualization.

Three classes of things that scientists measure:
1. Direct observables: those things we can observe rather simply and directly (the colour of an apple, etc.).
2. Indirect observables: require 'relatively more subtle, complex, or indirect observations'.
3. Constructs: theoretical creations that are based on observations but that cannot be observed directly or indirectly (IQ).

Concepts = a family of conceptions; a construct, something we create. Concepts are constructs derived by mutual agreement from mental images (conceptions). Our conceptions summarize collections of seemingly related observations and experiences.
Reification = regarding constructs as real; but constructs aren't actually real, they are just useful for making sense of things.
Conceptualization = the mental process whereby fuzzy and imprecise notions (concepts) are made more specific and precise. Conceptualization gives definite meaning to a concept by specifying one or more indicators of what we have in mind.
Indicator = a sign of the presence or absence of the concept we're studying (an observation that we choose to consider as a reflection of a variable we wish to study). Example: is giving food to homeless people an indicator of compassion?
Dimension = a specifiable aspect of a concept (making groups). Example: the economic dimension of compassion, the social dimension, the political dimension. If you totally disagree on the value of the indicators: study them all.
Interchangeability of indicators = if several different indicators all represent, to some degree, the same concept, then all of them will behave the same way the concept would behave if it were real and could be observed.

Real definition (reification) = not a stipulation determining the meaning of some expression but a statement of the 'essential nature' or the 'essential attributes' of some entity; it mistakes a construct for a real entity.
Nominal definition = a definition simply assigned to a term without any claim that the definition represents a 'real' entity; arbitrary.
Operational definition = specifies precisely how a concept will be measured: the operations we choose to perform. It achieves maximum clarity about what a concept means in the context of a given study.
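A small sketch of moving from a nominal to an operational definition, using the compassion example above. The specific indicators, the yes/no checklist and the scoring rule are hypothetical choices made for illustration, not prescribed by the text.

```python
# Hypothetical sketch of moving from a nominal to an operational definition.
# Concept: "compassion". Nominal definition (assumed here): sympathy for and a
# wish to help people in need. Operational definition: the number of helping
# behaviours a respondent reports from a fixed checklist for the past month.

# Indicators chosen for this made-up study; each is a yes/no survey item.
indicators = ["gave food to a homeless person",
              "donated to charity",
              "comforted someone who was upset"]

# One respondent's hypothetical answers.
answers = {"gave food to a homeless person": True,
           "donated to charity": False,
           "comforted someone who was upset": True}

# The operational definition states exactly which operation produces the score.
compassion_score = sum(1 for item in indicators if answers[item])
print("Compassion score (0-3):", compassion_score)
```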
Measurement steps:
1. Conceptualization: what are the different meanings?
2. Nominal definition: what we define the term to mean.
3. Operational definition: how do you measure it?
4. Measurements in the real world: real life.

Clear and precise definitions are especially important for descriptive research; they matter less for explanatory research.
Conceptualization = the refinement and specification of concepts.
Operationalization = the development of specific research procedures that will result in empirical observations representing those concepts in the real world.

Every variable must have 2 important qualities:
1. The attributes composing the variable should be exhaustive. For the variable to have utility in research, we must be able to classify every observation in terms of one of the attributes composing the variable (provide enough options to classify every observation).
2. The attributes composing a variable must be mutually exclusive. We must be able to classify every observation in terms of one and only one attribute (each observation can be placed in only one group).

Choices in operationalization:
- Range of variation: researchers must be clear about the range of variation that interests them. The question is, to what extent are we willing to combine attributes in fairly gross categories?
- Variations between the extremes: the degree of precision, i.e. how fine you will make the distinctions among the various possible attributes composing a given variable.
- A note on dimensions: which dimension of a variable are you interested in?
- Defining variables and attributes: attribute = a characteristic or quality of something; variable = a logical set of attributes.

Levels of measurement (a small sketch follows this list):
- Nominal measures: variables whose attributes have only the characteristics of exhaustiveness and mutual exclusiveness (sex, religious affiliation, political party); a level of measurement describing a variable whose attributes are merely different, as distinguished from ordinal, interval or ratio measures.
  o Although the attributes composing each of these variables are distinct from one another, they have no additional structure.
  o Nominal measures merely offer names or labels for characteristics.
  o All we can say about two people in terms of a nominal variable is that they are either the same or different. Example: birthplace or gender.
- Ordinal measures: variables with attributes we can logically rank-order; a level of measurement describing a variable with attributes we can rank-order along some dimension. Example: education (high-medium-low).
- Interval measures: a level of measurement describing a variable whose attributes are rank-ordered and have equal distances between adjacent attributes, but no true zero point. The actual distance separating the attributes has meaning.
  o When comparing two people in terms of an interval variable, we can say they are different from one another (nominal) and that one is more than the other (ordinal). Example: IQ tests, temperature.
- Ratio measures: most of the social science variables meeting the minimum requirements for interval measures also meet the requirements for ratio measures. The attributes composing the variable, besides having all the structural characteristics mentioned previously, are based on a true zero point. Example: age, weight.
- Implications of levels of measurement: the appropriate level is determined by the analytic uses you've planned for a given variable (quantitative or qualitative), keeping in mind that some variables are inherently limited to a certain level. You can treat some variables as representing different levels of measurement: a variable representing a higher level, say ratio, can also be treated as representing a lower level of measurement, say ordinal. Try to use the highest level of measurement; you can always convert it to a lower level later, but not the other way around. Example: recoding age into groups such as young, middle-aged and old turns a ratio measurement into an ordinal measurement.
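A short sketch of the four levels of measurement and of recoding a ratio variable downward, as described in the list above. The example values and the age cut-offs (30 and 60) are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical data) of the four levels of measurement and
# of collapsing a higher-level variable into a lower-level one.

birthplace = ["NL", "DE", "NL"]        # nominal: only same/different comparisons
education = ["low", "medium", "high"]  # ordinal: rank order, but unequal gaps
iq_scores = [95, 110, 125]             # interval: equal distances, no true zero
ages = [17, 34, 61]                    # ratio: true zero, so ratios are meaningful

# A ratio-level statement:
print("One person is", round(ages[2] / ages[0], 1), "times as old as another")

# A ratio variable can always be recoded downward into an ordinal one
# (young / middle-aged / old), but the lost detail cannot be recovered.
def age_group(age):
    if age < 30:
        return "young"
    if age < 60:
        return "middle-aged"
    return "old"

print([age_group(a) for a in ages])
```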
Sometimes there is no single indicator that will give you the measure of a chosen variable. You'll want to make several observations for a given variable; you can then combine the several pieces of information you've collected to create a composite measurement of the variable in question. Example: college performance doesn't depend on just one indicator.

Measurements can be made with various degrees of precision. When constructing and evaluating measures, there are two important technical considerations:
1. Reliability: a matter of whether a particular technique, applied repeatedly to the same object, yields the same result each time. Reliability does not ensure accuracy any more than precision does.
   a. Test-retest method: make the same measurement more than once; you should get the same response if the measurement was sound the first time.
   b. Split-half method: use two sets (halves) of indicators to measure the same thing; these should give the same results because of the interchangeability of indicators.
   c. Established measures: when getting information from people, use measures that have proved their reliability in previous research.
   d. Reliability of research workers: measurement unreliability can also be generated by research workers.
   However, even total reliability does not ensure that our measures measure what we think they measure.
2. Validity: refers to the extent to which an empirical measure adequately reflects the real meaning of the concept under consideration; we are actually measuring what we say we are measuring.
   a. Face validity: the quality of an indicator that makes it seem a reasonable measure of some variable. Example: the frequency of going to church is a reasonable indicator of religiosity.
   b. Criterion-related validity (= predictive validity): the degree to which a measure relates to some external criterion. Example: the validity of a driver's license test is shown by the relationship between the scores people get and their subsequent driving records; the driving records are the criterion.
   c. Construct validity: the degree to which a measure relates to other variables (constructs) as expected within a system of theoretical (logical) relationships.
   d. Content validity: the degree to which a measure covers the range of meaning included within a concept.

Reliability is a function of consistency; validity is a function of shots being arranged around the bull's eye.
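A hedged sketch of the test-retest and split-half reliability checks described above; the respondents' scores and index items are made up, and a simple correlation is used as the consistency measure.

```python
# Hypothetical sketch of two reliability checks: test-retest (the same measure
# applied twice to the same people) and split-half (two halves of an index
# should give similar results if the indicators are interchangeable).

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = (sum((x - ma) ** 2 for x in a) / n) ** 0.5
    sb = (sum((y - mb) ** 2 for y in b) / n) ** 0.5
    return cov / (sa * sb)

# Test-retest: the same five respondents measured at time 1 and time 2.
time1 = [10, 14, 9, 16, 12]
time2 = [11, 13, 9, 15, 13]
print("Test-retest reliability (correlation):", round(corr(time1, time2), 2))

# Split-half: a 4-item index split into two halves; the closer the correlation
# between the half-scores is to 1, the more consistent the two halves.
items = [[1, 1, 1, 0],   # respondent 1's answers to items 1-4
         [0, 1, 1, 1],
         [1, 1, 1, 1],
         [0, 0, 0, 0],
         [1, 0, 1, 0]]
half_a = [row[0] + row[1] for row in items]
half_b = [row[2] + row[3] for row in items]
print("Split-half reliability (correlation):", round(corr(half_a, half_b), 2))
```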
-Specifying reliable operational definitions and measurements seem to rob concepts of their richness of meaning. Yet, the more variation and richness we allow for a concept, the more potential for disagreement on how it applies to a particular situation, thus reducing reliability. This dilemma explains the persistence of 2 difference approaches to social science; quantitative, nomothetic, structured techniques (ex. Surveys) and experiments on the one hand and qualitative, idiographic methods on the other hand (ex. Field research) -Social researchers should look to both colleagues and subjects as sources of agreement on the most useful meanings and measurements of the concepts they study. A tension is present between the criteria of reliability and validity, forcing a trade-off between the two. Measurement decisions can sometimes be judged by ethical standards. But it would be unethical to seek for a particular outcome of results deliberately through a biased definition of the issue. Main point’s chapter 5: - The interrelated processes of conceptualization, operationalization and measurement allow researchers to move from a general idea about what they want to study to effective and well-defined measurements in the real world. Measuring anything that exists - Conceptions are mental images we use as summary devices for bringing together observations and experiences that seem to have something in common. We use terms or labels to reference these conceptions. - Concepts are constructs: they represent the agreed-on meanings we assign to terms. Our concepts don’t exist in the real world, so they can’t be measured directly, but we can measure the things that our concepts summarize. Conceptualization - Conceptualization is the process of specifying observations and measurements that give concepts definite meaning for the purposes of a research study. - Conceptualization includes specifying the indicators of a concept and describing its dimensions. Operational definitions specify how variables relevant to a concept will be measured. Definitions in descriptive and explanatory studies - Precise definitions are even more important in descriptive than in explanatory studies. The degree of precision needed varies with the type and purpose of a study. Gedownload door Félix Karl ([email protected]) lOMoARcPSD|16481989 Operationalization choices - Operationalization is an extension of conceptualization that specifies the exact procedures that will be used to measure the attributes of variables. - Operationalization involves a series of interrelated choices: specifying the range of variation that is appropriate for the purposes of a study, determining how precisely to measure variables, accounting for relevant dimensions of variables, clearly defining the attributes of variables and their relationships and deciding on an appropriate level of measurement. - Researchers must choose from four types of measures that capture increasing amounts of information: nominal, ordinal, interval and ratio. The most appropriate level depends on the purpose of the measurement. - A given variable can sometimes be measured at different levels. When in doubt, researchers should use the highest level of measurement appropriate to that variable so they can capture the greatest amount of information. - Operationalization begins in the design phase of a study and continues through all phases of the research project, including the analysis of data. 
Criteria of measurement quality
- Criteria of the quality of measures include precision, accuracy, reliability and validity.
- Whereas reliability means getting consistent results from the same measure, validity refers to getting results that accurately reflect the concept being measured.
- Researchers can test or improve the reliability of measures through the test-retest method, the split-half method, the use of established measures and the examination of work performed by research workers.
- The yardsticks for assessing a measure's validity include face validity, criterion-related validity, construct validity and content validity.
- Creating specific, reliable measures often seems to diminish the richness of meaning our general concepts have. This problem is inevitable. The best solution is to use several different measures, tapping the various aspects of a concept.

The ethics of measurement
- Conceptualization and measurement must not be guided by bias or preferences for particular research outcomes.

Chapter 6: Sampling

A critical part of social science is deciding what to observe and what not. The process of selecting observations = sampling.

History of sampling: sampling developed hand in hand with political polling.
Sampling frame = the list of who you are asking.
Quota sampling = based on knowledge of the characteristics of the population being sampled: what proportion are men, what proportion are women, what proportions are of various incomes, ages, and so on. Quota sampling selects people to match a set of these characteristics (the right number of poor, white, rural men, for example). Quotas are based on the variables most relevant to the study.

Two types of sampling methods:
1. Nonprobability sampling: any technique in which samples are selected in some way not suggested by probability theory (reliance on available subjects as well as purposive, snowball, and quota sampling).
   a. Reliance on available subjects = convenience / haphazard sampling. A common method for journalists, but an extremely risky sampling method:
      i. it does not permit any control over the representativeness of a sample;
      ii. it cannot be used when you want to generalize.
   b. Purposive or judgmental sampling = selecting a sample on the basis of knowledge of a population, its elements, and the purpose of the study; a type of nonprobability sampling in which the units to be observed are selected on the basis of the researcher's judgment about which ones will be the most useful or representative.
   c. Snowball sampling = a nonprobability-sampling method, often employed in field research, whereby each person interviewed may be asked to suggest additional people for interviewing. It refers to the process of accumulation as each located subject suggests other subjects.
   d. Quota sampling = a type of nonprobability sampling in which units are selected into a sample on the basis of pre-specified characteristics, so that the total sample will have the same distribution of characteristics assumed to exist in the population being studied (a sketch follows below).
      i. Build a matrix or table describing the relevant characteristics.
      ii. After making the matrix, proceed to collect data from people having all the characteristics of a given cell.
      iii. Assign to all the people in a given cell a weight appropriate to their portion of the total population.
      Problems: (1) the quota frame must be accurate, and it's often difficult to get up-to-date information for this purpose; (2) the selection of sample elements within a given cell may be biased even though its proportion of the population is accurately estimated.
   e. Selecting informants = members of the group who can talk directly about the group per se.
   Nonprobability sampling has issues; it is particularly used in qualitative research projects. Researchers must take care to acknowledge the limitations of nonprobability sampling, especially regarding accurate and precise representation of populations.
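A small sketch of the quota-sampling steps listed above (matrix, cell quotas, weights). The population shares, cell counts and sample size are invented for illustration.

```python
# Hypothetical sketch of quota sampling: build a matrix of relevant
# characteristics, fill each cell with matching respondents, and weight each
# cell by its share of the population.

# Assumed population proportions for a 2x2 quota matrix (gender x age group).
population_share = {
    ("female", "under 40"): 0.28,
    ("female", "40 plus"):  0.24,
    ("male",   "under 40"): 0.26,
    ("male",   "40 plus"):  0.22,
}

sample_size = 100
quota = {cell: round(share * sample_size) for cell, share in population_share.items()}
print("Respondents to recruit per cell:", quota)

# If a cell ends up over- or under-filled, weight its cases so the sample
# again mirrors the assumed population distribution.
achieved = {("female", "under 40"): 30, ("female", "40 plus"): 24,
            ("male", "under 40"): 20, ("male", "40 plus"): 26}
weights = {cell: (population_share[cell] * sample_size) / achieved[cell]
           for cell in achieved}
print("Weight per respondent in each cell:", weights)
```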
2. Probability sampling = the general term for samples selected in accordance with probability theory, typically involving some random-selection mechanism. Specific types of probability sampling include EPSEM, PPS, simple random sampling, and systematic sampling.
   a. Conscious and unconscious sampling bias: the possibilities for inadvertent sampling bias are endless and not always obvious. Fortunately, several techniques can help us avoid bias.
   b. Representativeness = the quality of a sample of having the same distribution of characteristics as the population from which it was selected. By implication, descriptions and explanations derived from an analysis of the sample may be assumed to represent similar ones in the population. Representativeness is enhanced by probability sampling and provides for generalizability and the use of inferential statistics. A sample is representative of the population from which it is selected if the aggregate characteristics of the sample closely approximate those same aggregate characteristics in the population.
      i. A sample need not be representative in all respects: representativeness concerns only those characteristics that are relevant to the substantive interests of the study (though you may not know in advance which characteristics are relevant).
      ii. EPSEM = equal probability of selection method = a sample design in which each member of a population has the same chance of being selected into the sample.
      iii. Element = the unit of which a population is composed and which is selected in a sample; distinguished from units of analysis, which are used in data analysis. The element is the unit you collect information from.
      iv. Population = the theoretically specified aggregation of the elements in a study.
      v. Study population = the aggregation of elements from which a sample is actually selected.
   c. Random selection: the ultimate purpose of sampling is to select a set of elements from a population in such a way that descriptions of those elements accurately portray the population from which the elements are selected.
      i. Random selection = a sampling method in which each element has an equal chance of selection independent of any other event in the selection process.
      ii. Sampling unit = the element or set of elements considered for selection in some stage of sampling.
      Reasons for using random selection: 1. the procedure serves as a check on conscious or unconscious bias on the part of the researcher; 2. random selection offers access to the body of probability theory, which provides the basis for estimating the characteristics of the population as well as estimates of the accuracy of samples.
   d. Probability theory is a branch of mathematics that provides the basis for estimating the parameters of a population (a parameter is the summary description of a given variable in a population) and gives researchers the tools they need:
      i. to devise sampling techniques that produce representative samples (avoid bias);
      ii. to statistically analyse the results of their sampling (it permits estimates of error).
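A minimal sketch of EPSEM random selection using Python's standard library: with a simple random sample, every element in the (hypothetical) sampling frame has the same chance of selection, n/N.

```python
# Minimal sketch (hypothetical frame) of EPSEM random selection: every element
# in the sampling frame gets the same chance of ending up in the sample.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

sampling_frame = [f"student_{i:03d}" for i in range(1, 501)]  # N = 500 elements
sample = random.sample(sampling_frame, 50)                    # simple random sample, n = 50

# Each element's probability of selection is n / N = 50 / 500 = 0.1.
print("Selection probability per element:", len(sample) / len(sampling_frame))
print("First five sampled elements:", sample[:5])
```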
Probability theory tells us about the distribution of estimates that would be produced by a large number of such samples.
- Statistic = the summary description of a variable in a sample, used to estimate a population parameter.
- Parameter = the summary description of that variable in the whole population, like the mean age.

Important rules regarding the sampling distribution:
- If many independent random samples are selected from a population, the sample statistics provided by those samples will be distributed around the population parameter in a known way.
- Probability theory gives us a formula for estimating how closely the sample statistics are clustered around the true value. To put it another way, probability theory enables us to estimate the sampling error = the degree of error to be expected in probability sampling. The formula for determining sampling error involves three factors: the parameter, the sample size and the standard error:

  s = √(P × Q / n)

  where s is the standard error, P and Q are the population parameters (e.g., P = the proportion who approve and Q = 1 - P = the proportion who disapprove), and n = the number of participants in each sample.

Standard error = a valuable piece of information because it indicates the extent to which the sample estimates will be distributed around the population parameter; it is the standard deviation of the sampling distribution.

Two key components of sampling-error estimates:
- Confidence level = the estimated probability that a population parameter lies within a given confidence interval; thus we might be 95 percent confident that between 35 and 45 percent of all voters favour Candidate A.
- Confidence interval = the range of values within which a population parameter is estimated to lie.
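A quick numerical check of the sampling-error formula and the confidence-interval idea above; P, Q and n are hypothetical values, and the "two standard errors" rule is the usual rough reading of a 95 percent confidence level.

```python
# Numeric illustration (hypothetical numbers) of the sampling-error formula
# s = sqrt(P * Q / n). Suppose 60% of the population approves of something
# (P = 0.6, Q = 0.4) and we draw samples of n = 400.

P, Q, n = 0.6, 0.4, 400
standard_error = (P * Q / n) ** 0.5
print("Standard error:", round(standard_error, 4))  # about 0.0245, i.e. 2.45 percentage points

# Roughly 95% of sample estimates fall within two standard errors of the
# population parameter, which is one way to read a 95% confidence interval.
low, high = P - 2 * standard_error, P + 2 * standard_error
print(f"95% confidence interval: {low:.3f} to {high:.3f}")
```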
Sampling frame = the list or quasi list of units composing a population from which a sample is selected. If the sample is to be representative of the population, it is essential that the sampling frame includes all (or nearly all) members of the population.
Main guidelines for the sampling frame:
- Findings based on a sample can be taken as representing only the aggregation of elements that compose the sampling frame.
- Often, sampling frames do not truly include all the elements their names might imply. Omissions are almost inevitable. Thus, a first concern of the researcher must be to assess the extent of the omissions and to correct them if possible.
- To be generalized even to the population composing the sampling frame, all elements must have equal representation in the frame. Typically, each element should appear only once. Elements that appear more than once will have a greater probability of selection, and the sample will, overall, over-represent those elements.
You seldom choose simple random sampling, because:
- With all but the simplest sampling frame, simple random sampling is not feasible.
- Simple random sampling may not be the most accurate method available.
Simple random sampling = a type of probability sampling in which the units composing a population are assigned numbers, a set of random numbers is then generated, and the units having those numbers are included in the sample; the underlying mathematics is difficult.
Systematic sampling = a type of probability sampling in which every kth unit in a list is selected for inclusion in the sample. You compute k by dividing the size of the population by the desired sample size; k is called the sampling interval. E.g. every tenth element is selected into the sample.
- Sampling interval = the standard distance between elements selected in the sample: population size / sample size = k.
- Sampling ratio = the proportion of elements in the population that are selected: sample size / population size.
Danger of systematic sampling: the arrangement of elements in the list can make systematic sampling unwise. Such an arrangement is usually called periodicity and can cause bias. Lists should be randomized before systematic sampling. Still, this method is superior to simple random sampling in practice.
Stratification = the grouping of the units composing a population into homogeneous groups (or strata) before sampling. This procedure, which may be used in conjunction with simple random, systematic, or cluster sampling, improves the representativeness of a sample, at least in terms of the variables used for stratification. It is not an alternative to the other methods, but rather a possible modification of their use: researchers ensure that appropriate numbers of elements are drawn from homogeneous subsets of the population. Example: stratifying high-school students by year so that all the years are represented in the sample (year 1, 2, ...).
Cluster sampling = multistage sampling in which natural groups (clusters) are sampled initially, with the members of each selected group being subsampled afterwards. Example: the researcher might select a sample of U.S. colleges and universities from a directory, get lists of the students at all the selected schools, then draw samples of students from each.
- Multistage cluster sampling involves the repetition of two basic steps: listing and sampling.
- Although this method is highly efficient, it is less accurate. A multistage sample design is subject to a sampling error at each stage:
o The initial sample of clusters will represent the population of clusters only within the range of the sampling error.
o The sample of elements selected within a given cluster will represent all the elements in that cluster only within the range of the sampling error.
- General guideline: maximize the number of clusters selected while decreasing the number of elements within each cluster. The efficiency of cluster sampling is based on the ability to minimize the listing of population elements.
- Stratification techniques can be used to refine and improve the sample being selected:
o Once the sampling units have been grouped according to the relevant stratification variables, either simple random or systematic sampling techniques can be used to select the sample.
o To the extent that clusters are combined into homogeneous strata, the sampling error at this stage will be reduced.
Whenever the arrangement of the list creates an implicit stratification, systematic sampling is better to use than simple random sampling.
Cluster sampling is used when it's either impossible or impractical to compile an exhaustive list of the elements composing the target population, such as all church members in the US.
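A minimal two-stage cluster sampling sketch, assuming a hypothetical set of schools and student rosters (the names, sizes, and seed are invented for illustration):

```python
import random

random.seed(1)

# Hypothetical clusters: 50 schools, each with a roster of 200 students.
clusters = {f"school_{i}": [f"school_{i}_student_{j}" for j in range(200)]
            for i in range(50)}

# Stage 1: list the clusters and randomly select some of them.
selected_schools = random.sample(list(clusters), k=10)

# Stage 2: list the elements of each selected cluster and subsample them.
sample = []
for school in selected_schools:
    sample.extend(random.sample(clusters[school], k=20))

print(len(sample))   # 10 clusters x 20 students = 200 sampled elements
```

Each stage repeats the listing-then-sampling pattern described above, and each stage contributes its own sampling error.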
Example of a more complex design:
- Sample blocks
- List the households on each selected block
- Sample the households
- List the people residing in each household
- Sample the people within each selected household
Cluster sampling thus means selecting a random or systematic sample of clusters and then a random or systematic sample of elements within each cluster selected.
PPS = probability proportionate to size = a type of multistage cluster sample in which clusters are selected not with equal probabilities but with probabilities proportionate to their sizes, as measured by the number of units to be subsampled.
Weighting = assigning different weights to cases that were selected into a sample with different probabilities of selection. In the simplest scenario, each case is given a weight equal to the inverse of its probability of selection. When all cases have the same chance of selection, no weighting is necessary. Researchers may sample subpopulations disproportionately to ensure sufficient numbers of cases from each for analysis.
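A minimal sketch of inverse-probability weighting; the groups and selection probabilities below are made up for illustration, not taken from the summary:

```python
# Two hypothetical cases drawn with different probabilities of selection,
# e.g. because a small subpopulation was deliberately oversampled.
cases = [
    {"id": 1, "group": "majority", "p_selection": 1 / 100},
    {"id": 2, "group": "minority", "p_selection": 1 / 10},   # oversampled
]

for case in cases:
    # Weight = inverse of the probability of selection.
    case["weight"] = 1 / case["p_selection"]

# A case selected with probability 1/100 stands for 100 population members;
# the oversampled case, selected with probability 1/10, stands for only 10.
print([(c["id"], c["weight"]) for c in cases])   # [(1, 100.0), (2, 10.0)]
```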
Main points chapter 6:
- Social researchers must select observations that will allow them to generalize to people and events not observed. Often this involves sampling, a selection of people to observe.
- Understanding the logic of sampling is essential to doing social research.
A brief history of sampling
- Sometimes you can and should select probability samples using precise statistical techniques, but at other times nonprobability techniques are more appropriate.
Nonprobability sampling
- Nonprobability-sampling techniques include reliance on available subjects, purposive (judgmental) sampling, snowball sampling, and quota sampling. In addition, researchers studying a social group may make use of informants. Each of these techniques has its uses, but none of them ensures that the resulting sample will be representative of the population being sampled.
The theory and logic of probability sampling
- Probability-sampling methods provide an excellent way of selecting representative samples from large, known populations. These methods counter the problems of conscious and unconscious sampling bias by giving each element in the population a known (nonzero) probability of selection.
- Random selection is often a key element in probability sampling.
- The most carefully selected sample will never provide a perfect representation of the population from which it was selected. There will always be some degree of sampling error.
- By predicting the distribution of samples with respect to the target parameter, probability-sampling methods make it possible to estimate the amount of sampling error expected in a given sample.
- The expected error in a sample is expressed in terms of confidence levels and confidence intervals.
Populations and sampling frames
- A sampling frame is a list or quasi list of the members of a population. It is the resource used in the selection of a sample. A sample's representativeness depends directly on the extent to which a sampling frame contains all the members of the total population that the sample is intended to represent.
Types of sampling designs
- Several sampling designs are available to researchers.
- Simple random sampling is logically the most fundamental technique in probability sampling, but it is seldom used in practice.
- Systematic sampling involves the selection of every kth member from a sampling frame. This method is more practical than simple random sampling and, with a few exceptions, is functionally equivalent.
- Stratification, the process of grouping the members of a population into relatively homogeneous strata before sampling, improves the representativeness of a sample by reducing the degree of sampling error.
Multistage cluster sampling
- Multistage cluster sampling is a relatively complex sampling technique that is frequently used when a list of all the members of a population does not exist. Typically, researchers must balance the number of clusters and the size of each cluster to achieve a given sample size. Stratification can be used to reduce the sampling error involved in multistage cluster sampling.
- Probability proportionate to size (PPS) is a special, efficient method for multistage cluster sampling.
- If the members of a population have unequal probabilities of selection into the sample, researchers must assign weights to the different observations made, in order to provide a representative picture of the total population. Basically, the weight assigned to a particular sample member should be the inverse of its probability of selection.
Probability sampling in review
- Probability sampling remains the most effective method for the selection of study elements because (1) it allows researchers to avoid biases in element selection and (2) it permits estimates of error.
The ethics of sampling
- Probability sampling always carries a risk of error: researchers must inform readers of any errors that might make results misleading.
- When nonprobability-sampling methods are used to obtain the breadth of variations in a population, researchers must take care not to mislead readers into confusing variations with what's typical in the population.

Chapter 7: Social experiments
Controlled experiment = a research method commonly associated with the natural sciences, carried out in a laboratory. Experiments involve taking action and observing the consequences of that action.
Natural experiments = experiments that occur in the regular course of social events.
Experiments involve three major pairs of components:
- Independent and dependent variables: an experiment examines the effect of the independent variable (the experimental stimulus) on the dependent variable. This stimulus is a dichotomous variable: present or not present.
- Pretesting = the measurement of a dependent variable among subjects before they are exposed to a stimulus representing an independent variable, and post-testing = the re-measurement of a dependent variable among subjects after they have been exposed to a stimulus representing an independent variable. This raises a problem of validity: the act of studying something may change it.
- Experimental group = in experimentation, a group of subjects to whom an experimental stimulus is administered, and control group = in experimentation, a group of subjects to whom no experimental stimulus is administered and who resemble the experimental group in all other respects. The comparison of the control group and the experimental group at the end of the experiment points to the effect of the experimental stimulus.
Double-blind experiment = an experimental design in which neither the subjects nor the experimenters know which is the experimental group and which is the control group.
Selecting subjects:
Randomization = a technique for assigning experimental subjects to experimental and control groups randomly (think back to the simple random sampling method: assigning numbers and then randomly picking).
- Another way to achieve comparability between the experimental and control groups is matching = (similar to quota sampling) in connection with experiments, the procedure whereby pairs of subjects are matched on the basis of their similarities on one or more variables, and one member of the pair is assigned to the experimental group and the other to the control group.
- Rather use randomization than matching if you don't know the relevant variables beforehand; this avoids bias. On the other hand, randomization only makes sense if you have a fairly large number of subjects.
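A minimal sketch of randomization, assuming a hypothetical pool of 40 subjects (the names, pool size, and seed are illustrative only):

```python
import random

random.seed(7)

# Hypothetical pool of subjects to be assigned.
subjects = [f"subject_{i}" for i in range(40)]

# Randomization: shuffle the pool and split it in half, so that assignment to
# the experimental or control group is independent of any subject characteristic.
random.shuffle(subjects)
half = len(subjects) // 2
experimental_group = subjects[:half]
control_group = subjects[half:]

print(len(experimental_group), len(control_group))   # 20 and 20
```

With enough subjects, chance alone makes the two groups comparable on both known and unknown variables, which is why randomization is generally preferred over matching.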
Variations on experimental design
Pre-experimental designs = designs that do not meet the scientific standards of experimental designs; they may be used because the conditions for full-fledged experiments are impossible to meet.
- One-shot case study: a single group of subjects is measured on a dependent variable following the administration of some experimental stimulus.
- One-group pre-test/post-test design: suffers from the possibility that some factor other than the independent variable might cause a change between the pre-test and post-test results.
- Static-group comparison: shows something to one group but not to another and then measures the same variable in both groups. SEE EXAMPLE IN BOOK P. 256 TO CLARIFY
Validity issues in experimental research:
- Sources of internal invalidity = the possibility that the conclusions drawn from experimental results may not accurately reflect what went on in the experiment itself, i.e. when anything other than the experimental stimulus can affect the dependent variable.
o History: historical events may occur that confound the experimental results.
o Maturation: subjects are continually growing and changing, and such changes affect the results of the experiment; the fact that the subjects grow older can have an effect.
o Testing: often the process of testing and retesting influences people's behaviour, thereby confounding the experimental results.
o Instrumentation: the process of measurement in pre-testing and post-testing brings in some of the issues of conceptualization and operationalization; the measure might not be the same for pre- and post-testing.
o Statistical regression: sometimes it's appropriate to conduct experiments on subjects who start out with extreme scores on the dependent variable; the danger lies in the fact that such subjects are likely to change (toward less extreme scores) simply because they started out so extreme.
o Selection biases: comparisons have no meaning unless the groups are comparable at the start of an experiment.
o Experimental mortality: a more general and less extreme problem; subjects may no longer want to participate, and the experiment basically fails.
o Demoralization: feelings of deprivation within the control group may result in their giving up.
- Sources of external invalidity = the possibility that conclusions drawn from experimental results may not be generalizable to the 'real' world.
o The generalizability of experimental findings is jeopardized, as the authors point out, if there's an interaction between the testing situation and the experimental stimulus.
o Campbell and Stanley designed the Solomon four-group design, which addresses the problem of testing interaction with the stimulus. It avoids the risk that pretesting will have an effect on subjects. It also provides data for comparisons that will reveal the amount of such interaction that occurs in the classical experimental design.
o Campbell and Stanley also designed the post-test-only control group design, which uses randomized assignment of subjects to experimental and control groups (i.e. only groups 3 and 4 of the Solomon design, so without a pre-test). The subjects will be initially comparable on the dependent variable.
Field experiment = a formal experiment conducted outside the laboratory, in a natural setting.
- The World Wide Web has become an increasingly common vehicle for performing social scientific experiments.
Natural experiments often occur in the course of social life in the real world, and social researchers can implement them in somewhat the same way they would design and conduct laboratory experiments.
The advantage of a controlled experiment lies in the isolation of the experimental variable's impact over time (think of the basic experimental model with one experimental and one control group: no external factor, such as watching the news at home, can influence the results). Further, because individual experiments are often rather limited in scope, requiring relatively little money and time, we often can replicate a given experiment several times using many different groups and subjects.
The greatest weakness of laboratory experiments lies in their artificiality. Social processes that occur in a laboratory setting might not necessarily occur in more natural social settings.
Ethics: experiments almost always involve deception. It is important to determine whether a particular deception is essential to the experiment and whether the value of what may be learned from the experiment justifies the ethical violation. Experiments typically intrude on the lives of the subjects. Researchers must balance the potential value of the research against the potential damage to the subjects.
Main points chapter 7:
- In experiments, social researchers typically select a group of subjects, do something to them, and observe the effect of what was done.
Topics appropriate for experiments
- The interaction of testing with the stimulus is an example of external invalidity that the does not guard against - The Solomon four-group design and other variations on the can safeguard against external invalidity. - Campbell and Stanley suggest that, given proper randomization in the assignment of subject to the experimental and control groups, there is no need for pretesting in experiments. Examples of experimentation - In a controlled field experiment, researchers exposed the Pygmalion effect as one phenomenon that researchers must account for in experimental design. - One recent experiment in a laboratory setting showed that a ‘motherhood penalty’ exists in the work wold. Web-based experiments - The world wide web has become an increasingly common vehicle for performing social science experiments ‘natural’ experiments - Natural experiments often occur in the course of social life in the real world, and social researchers can implement them in somewhat the same way they would design and conduct laboratory experiments. Strengths and weaknesses of the experimental method - Like all research methods, experiments have strengths and weaknesses. Gedownload door Félix Karl ([email protected]) lOMoARcPSD|16481989 - The primary weakness of experiments is artificiality: what happens in an experiment may not reflects what happens in the outside world. - The strengths of experimentation include the isolation of the independent variable, which permits causal inferences; the relative ease of replication; and scientific rigor Ethics and experiments - Experiments typically involve deceiving subjects - By their intrusive nature, experiments open the possibility of inadvertently causing damage to subjects. Chapter 8: Survey Research Surveys may be used for descriptive, explanatory, and exploratory purposes. They are used in studies that have individual people as the units of analysis. Respondent = a person who provides data for analysis by responding to a survey questionnaire. Questionnaire = a document containing questions and other types of items designed to solicit information appropriate for analysis. primarily in survey research but also in experiments, field research, and other modes of observation. Options for question forms: - Questions and statements ; also many statements in questionnaires. The researcher (quantitative) wants to determine the extent to which respondents hold a particular attitude or perspective. - Open-ended and closed-ended questions open-ended = questions for which the respondent is asked to provide his or her own answers. Closed-ended = questions for which the respondent is asked to select an answer from among a list provided by the researcher. provide a greater uniformity of responses and are more easily processed than open-ended questions. o Construction of closed-ended questions should be guided: ▪ The response categories should be exhaustive (enough to include every answer in a category): include all the possible answers. ▪ Answer categories must be mutually exclusive (every answer should fit into one category): should not feel compelled to select more than one. DON’T: Double-barrelled questions: ask for a single answer to a question that actually has multiple parts. when the word appears, check whether you’re asking a double-barrelled question. Questions you use for your questionnaire should be relevant, and short items are the best. 
Avoid negative items (example: 'people should not go to that place'): people will read over the negative word and then respond incorrectly.
There are no true meanings for any of the concepts we typically study in social science. The meaning of someone's response to a question depends in large part on its wording.
Bias = the quality of a measurement device that tends to result in a misrepresentation, in a particular direction, of what is being measured. Example: when using an institutional name or brand, people might be more likely to give a biased response. Whenever we ask people for information, they answer through a filter of what will make them look good = the social desirability of questions.
General questionnaire format: the format is as important as the wording of the questions. As a general rule, a questionnaire should be spread out and uncluttered.
Formats for respondents: boxes adequately spaced apart provide the best format for respondents. This method has the advantage of specifying the code number to be entered later in the processing stage.
Certain questions will be relevant to some of the respondents and irrelevant to others. Contingency question = a survey question intended for only some respondents, determined by their responses to some other question. Example: all respondents might be asked whether they belong to the Cosa Nostra, and only those who said yes would be asked how often they go to meetings, etc.
Matrix questions: a matrix is appropriate when several questions with the same set of answer categories are asked.
It has several advantages:
- It uses space efficiently.
- Respondents will probably complete such a set of questions more quickly than with other formats.
- This format may increase the comparability of responses given to different questions.
It also has some dangers:
- Its advantages might encourage you to structure an item so that the responses fit into the matrix format, when a different set of responses might be more appropriate.
- The matrix can foster a response set among some respondents: they might fall into a pattern of answering (for example, agreeing with every item) regardless of the content of the questions.
Ordering items in a questionnaire
- The appearance of one question can affect the answers given to later ones. Example: if respondents are asked to assess their overall religiosity, their responses to later questions concerning specific aspects of religiosity will be aimed at consistency with the prior assessment.
- The impact of item order is not uniform among respondents (some people will be affected by the order, others not at all). Some researchers attempt to overcome this by randomizing the order of items, but this can lead to a chaotic questionnaire.
- The best protection is sensitivity to the problem: try to estimate what the effect of item order will be so that the results can be interpreted meaningfully. In addition, a questionnaire should be pre-tested.
Questionnaire instructions:
- A questionnaire should contain instructions and introductory comments. It's useful to begin with basic instructions for completing it. Then, for every subsection, a short statement containing its purpose and content should be provided. Somewhere the questionnaire should contain demographic questions (usually at the end).
Pretesting the questionnaire:
- The questionnaire should be pretested to avoid error.
The three main methods of administering survey questionnaires to a sample of respondents are:
1. Self-administered questionnaires, in which respondents are asked to complete the questionnaire themselves.
2. Surveys administered by interviewers in face-to-face encounters.
3. Surveys conducted by telephone.
The basic method for collecting data through the mail has been to send a questionnaire accompanied by a letter of explanation and a self-addressed, stamped envelope. The researcher needs to make the return of the form as easy as possible. Postage stamps are cheaper if many questionnaires are returned, but business-reply permits are cheaper if fewer are returned. Thinking about the cost, and about which method to use, is important.
When questionnaires are returned they should get an ID number. Serialized ID numbers can be valuable in estimating nonresponse biases in the survey.
Follow-up mailings can be administered in several ways. The best method is to send a new copy of the survey with the follow-up letter. Researchers most often send a follow-up mailing to every participant, thanking those who sent the questionnaire back and encouraging those who didn't.
Response rate = the number of people participating in a survey divided by the number selected in the sample, in the form of a percentage. This is also called the completion rate or, in self-administered surveys, the return rate: the percentage of questionnaires sent out that are returned.
- The overall response rate is one guide to the representativeness of the sample respondents.
Interview = a data-collection encounter in which one person (an interviewer) asks questions of another (a respondent). Interviews may be conducted face-to-face or by telephone.
Advantages of an interview survey over a mail survey: an interview survey typically has higher response rates than a mail survey. The presence of an interviewer also decreases the number of 'don't know' and 'no answer' responses. Interviewers can also guard against confusing questionnaire items. Finally, the interviewer can observe respondents as well as ask questions.
Guidelines for survey interviewing:
- Appearance and demeanour
o Interviewers should dress similarly to their interviewees. The interviewer should be pleasant. The interview will be more successful if the interviewer can become the kind of person the respondent is comfortable with.
- Familiarity with the questionnaire
o The interviewer must be able to read the questionnaire items to respondents without error, without stumbling over words. The interviewer must be familiar with the specifications prepared in conjunction with the questionnaire.
- Follow question wording exactly
- Record responses exactly
o This exactness is especially important because the interviewer will not know how the responses are to be coded. Researchers can use any marginal comments explaining aspects of the response not conveyed in the verbal recording.
- Probing for responses
o Probe = a technique employed in interviewing to solicit a more complete answer to a question. It is a nondirective phrase or question used to encourage a respondent to elaborate on an answer.
o Often interviewers need to probe for answers that will be sufficiently informative for analytical purposes. The probes must be neutral, and all interviewers must use the same probes when they're needed.
- Coordination and control
o Most interview surveys require the assistance of several interviewers. Their efforts must be controlled. It is important to train the interviewers and supervise them.
It is also appropriate to prepare specifications = explanatory and clarifying comments about handling difficult or confusing situations that may occur with regard to particular questions in the questionnaire.
o The interviewers must be familiar with the research and with the questions, practice the questions, see an interview demonstrated, and pre-test some real interviews.
The biggest disadvantage of telephone surveys is that they are limited to people who have telephones. Compared with face-to-face interviews, telephone surveys save money and time, you can dress any way you want without influencing the respondents, and they provide safety for both parties. This method is hampered by the proliferation of bogus surveys that are actually sales campaigns disguised as research. Also, it is easy for participants to drop out and hang up the phone.
- Computers are also changing the nature of telephone interviewing. One innovation is computer-assisted telephone interviewing (CATI). This method is increasingly used by academic, governmental, and commercial survey researchers. CATI automatically prepares the data for analysis, so the researcher can begin analysing the data before the interviewing is complete.
Many of the new technologies affecting people's lives also open new possibilities for survey research. Some of these technologies are: computer-assisted self-interviewing (CASI), the computerized self-administered questionnaire (CSAQ), touchtone data entry (TDE), and voice recognition (VR).
- The new technology of survey research includes the use of the Internet. The main concern with online surveys is representativeness: will the people who can be surveyed online be representative of meaningful populations?
- Advantages of survey research: surveys are useful in describing the characteristics of a large population. Surveys make large samples feasible. Surveys are flexible. Standardized questions offer an important strength with regard to measurement generally.
Survey research also has several weaknesses: survey research cannot deal with the context of social life. Surveys typically require that an initial study design remain unchanged. Surveys are subject to the artificiality mentioned in connection with experiments. Surveys cannot measure social action; they can only collect self-reports of recalled past action.
The problem of artificiality has two aspects:
- The topic of study may not be amenable to measurement through questionnaires.
- The act of studying that topic may affect it (someone may never even have thought about the topic before).
Survey research is weak on validity and strong on reliability.
Survey research consists of:
- Questionnaire construction
- Sample selection
- Data collection
Secondary analysis = a form of research in which the data collected and processed by one researcher are re-analysed, often for different purposes, by another researcher. This is especially appropriate in the case of survey data. Data archives are repositories for the storage and distribution of data for secondary analysis. Through secondary analysis the researcher can pursue their particular social research interest while avoiding the enormous expenditure of time and money such a survey entails. The key advantage of secondary analysis is that it's cheaper and faster than doing original surveys, and the researcher may benefit from the work of topflight professionals.
The ease of secondary analysis has also enhanced the possibility of meta-analysis = an analysis in which a researcher brings together a body of past research on a particular topic.
A disadvantage of secondary analysis is the recurrent question of validity: when one researcher collects data for one particular purpose, there is no assurance that those data will be appropriate for other research.
Ethics: always respect confidentiality and avoid psychological harm.
Main points chapter 8:
- Survey research, a popular social research method, is the administration of questionnaires to a sample of respondents selected from some population.
Topics appropriate for survey research
- Survey research is especially appropriate for making descriptive studies of large populations; survey data may be used for explanatory purposes as well.
- Questionnaires provide a method of collecting data by
o Asking people questions
o Asking them to agree or disagree with statements