Summary

These are notes for a final exam, covering research orientations, quantitative methods, qualitative methods, and the limitations of the quantitative and qualitative approaches to research. They provide an overview of the key concepts and methodologies in those areas.

Full Transcript

Week 2: Research Orientations

How Do We Know What We Know?
- Ways of knowing include experience, authority, tradition, and intuition.
- Many of these rely on cognitive logic processes, which can be flawed.
- Epistemology asks how we know what we know.
- Necessary and sufficient conditions for knowledge must be determined.

Social Science Epistemology
- Knowledge in social science is based on what can be reliably observed.
- Epistemology focuses on what counts as knowledge in social science.

Social Scientific Inquiry
- Different research questions require different methods.
- Scientific/quantitative methods include surveys and experiments.
- Interpretive/qualitative methods include interviews, focus groups, and ethnography/participant observation.

Social Scientific Theory
- Quantitative/deductive theory aims to create testable and falsifiable hypotheses.
- Theoretical constructs in quantitative theory must be observable.
- Interpretive/inductive theory seeks plausible interpretations of social data.

Hypothesis Testing
- Hypotheses propose relationships between variables.
- Hypotheses commonly take the form H1 (X is related to Y) or H2 (Group A will differ from Group B on X).
- Examples: communication apprehension affecting public speaking grades; social media use affecting mediated activity time.

Quantitative Methodology
- A quantitative methodology aims to produce unbiased and replicable knowledge.
- Methodologies produce quantifiable results, ideally generalizable.

Elements of Quantitative Research
- Understanding what is being studied (conceptualization).
- How the study will be conducted (operationalization).
- Who is participating (sampling).
- Clarity of data collection methods (replicability).
- Confidence in findings.

Variables
- Variables are entities that take on different values.
- Variables are the building blocks of research design.

Conceptualization
- Refines and specifies abstract concepts.
- Conceptualizations are working agreements, not dictionary definitions.

Language: The Problem
- Key concepts in communication studies (e.g., love, leadership, satisfaction, control, culture) can be problematic because language can create confusion and limitations in communication.

Conceptualization, Indicators, and Dimensions of Communication Apprehension
- Indicators of communication apprehension include nervousness, butterflies in the stomach, worry, speechlessness, fear, hives/blotchy skin, disfluencies, clammy hands, sweating, a faster heartbeat, and filled pauses.
- The concept has cognitive, physical, and behavioral dimensions.

Quantitative Approaches: Confidence in Findings
- All studies are likely flawed. The "coffee problem" highlights how preconceived notions can bias the interpretation of research findings.
- The overall effects of an action (e.g., coffee consumption) may be complex, preventing clear conclusions from individual studies.
- The Central Limit Theorem describes how sample statistics approximate population parameters.
- Normal distribution: data cluster around the mean.
- To gain confidence in quantitative findings:
  - Carefully conceptualize the variables being studied.
  - Precisely operationalize the variables.
  - Employ appropriate sampling techniques.
  - Ensure data collection methods allow for replication.

What are Variables?
- Variables are entities that take on diverse values; they underpin research design.
- Categorical (nominal) variables: mutually exclusive categories (e.g., hair color, gender).
- Ordinal variables: ordered categories (e.g., paper grades); meaningful rank but not precise differences.
- Interval variables: ranked, with meaningful differences, but no true zero point (e.g., temperature in Celsius).
- Ratio variables (ideal): ranked, with meaningful differences and a meaningful zero point (e.g., height, weight).
- Measure variables at the highest possible level whenever possible.
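The Central Limit Theorem mentioned above can be illustrated with a short simulation. This is a minimal sketch using Python's standard library; the skewed population (minutes of daily social media use) and the sample sizes are hypothetical choices, not from the course.

```python
import random
import statistics

random.seed(42)

# Hypothetical skewed population: minutes of daily social media use
population = [random.expovariate(1 / 60) for _ in range(100_000)]
pop_mean = statistics.mean(population)

# Draw many random samples and record each sample's mean
sample_means = [
    statistics.mean(random.sample(population, 100)) for _ in range(2_000)
]

# The sample means cluster tightly around the population mean,
# even though the population itself is far from normal.
print(round(pop_mean, 1), round(statistics.mean(sample_means), 1))
print(statistics.stdev(sample_means) < statistics.stdev(population))  # True
```

The spread of the sample means shrinks as the sample size grows, which is why larger samples give more confidence that a sample statistic approximates the population parameter.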
Conceptualization of Variables
- Conceptualization involves defining abstract concepts in a study-specific manner.
- Conceptualizations are working agreements, not dictionary definitions.
- Careful analysis of study variables is essential, especially for abstract concepts.
- Indicators are elements that demonstrate the presence or absence of a concept.
- Variables can be multidimensional (e.g., communication apprehension).

Connecting Variables to Questions
- Research questions propose relationships between variables and are general in nature.
- Hypotheses propose specific relationships between variables, usually in a directional form.
- Hypotheses can be correlational (a relationship between variables) or comparative (differences between groups).

Week 3: Measurements

1. Operationalization

Definitions
- Operationalization: developing specific research procedures that result in empirical observations representing those concepts in the real world.
- Measurement: careful, deliberate observations to describe objects and events in terms of the attributes composing a variable.

Turning Concepts into Numbers
- Transforming ideas into measures: assigning numbers to abstract ideas.
- Examples: how to quantify concepts like "love" or "communication apprehension."

Types of Operationalization in Social Sciences
- Direct observation:
  - Trained observers/coders: observing specific behaviors, such as where couples touch each other (e.g., arms, shoulders, hands, head).
  - Physical measurements: measuring physiological responses like heart rate, sweating, nervousness, and eye tracking.
  - Environmental residue: analyzing physical traces left by behavior, such as tracking app usage or using duct tape on the floor to see which art pieces are most viewed.
  - Textual and narrative data: comparing content from sources like newspapers.
- Survey/self-report data:
  - Open-ended questions: respondents provide their own answers.
    - Benefits: rich, detailed data; not constrained by the researcher.
    - Drawbacks: requires coding; may include irrelevant answers.
  - Closed-ended questions: respondents choose from a provided list.
    - Benefits: uniform responses; already in numeric format.
    - Drawbacks: may not capture the richness of experiences; the structure may not match respondent experiences.

Developing Measures
- Indicators: identify indicators of your variable.
- Crafting items: create questions or items based on these indicators.
- Composite measures: use multiple items to create a comprehensive measure.
- No single indicator is enough to determine whether the variable is present, but together the indicators provide a clearer picture.

Example Scales
- Likert scale: used for measuring attitudes or feelings (e.g., the Communication Apprehension Scale, a 1-to-7 scale).
- Semantic differential: measures attitudes using pairs of adjectives (e.g., the Marital Opinion Questionnaire, a 1-to-7 scale).
- Why 7? It provides a definite middle point and allows for variance in the data. An 8-point scale can include a "don't know" option.

Choosing the Scale
- Measure data at the most sophisticated level possible (e.g., age as a ratio variable); avoid using categorical/ordinal scales for ratio variables.
- Write more items than needed to ensure reliability.
- Use precise and clear language: keep items short, avoid jargon, and use language the respondent understands.
- Provide specific frames (e.g., "When thinking of...").
- Avoid absolutes like "always" or "never" in items.

Measure Creation Guidelines
- Language: use language that respondents understand.
- Specific frames: provide specific contexts for questions.
- Avoid absolutes: avoid words like "always" or "never."
- Respondent considerations:
  - Ensure respondents are willing and able to answer.
  - Be aware of social desirability bias.
  - Ensure respondents have the knowledge or experience to answer.
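The composite-measure idea above can be sketched as a short scoring routine. The items and responses here are hypothetical illustrations, not the published Communication Apprehension Scale; positively worded items are reverse-scored before averaging so all items point the same direction.

```python
# Hypothetical 1-7 Likert items for communication apprehension.
# Items marked reverse=True are worded positively ("I feel calm..."),
# so their scores must be flipped before averaging.
ITEMS = [
    ("I feel nervous before giving a speech.", False),
    ("I feel calm while speaking in public.", True),
    ("My heart races when I address a group.", False),
]

def composite_score(responses, scale_max=7):
    """Average the items into one composite, reverse-scoring where needed."""
    adjusted = []
    for (_, reverse), value in zip(ITEMS, responses):
        adjusted.append((scale_max + 1 - value) if reverse else value)
    return sum(adjusted) / len(adjusted)

# A respondent answering 6, 2, 7: the "calm" item (2) reverse-scores to 6,
# so the composite is (6 + 6 + 7) / 3 = 6.33 -> high apprehension.
print(round(composite_score([6, 2, 7]), 2))  # 6.33
```

Averaging multiple items is what makes the measure composite: no single item settles whether apprehension is present, but together they give a clearer picture.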
- Consider whether respondents can remember the information.

2. Sampling

Sampling Terms
- Sample: a subset of elements or units selected from a population.
- Population: the entire group you wish to generalize the results to.
- Example populations: all the people in the world; the population of Canada; married couples; teenagers and screen time.
- Sampling elements: a single case in the population, often people, but possibly organizations, messages, words, or locations.
- Why not use the whole population? An accurate census is usually too difficult and costly to conduct.

Sample Recap
- A sample is a subset of the population: it is generally impossible to study everyone, so we study a smaller group.
- Goal: a sample that is representative of the population we wish to study.

Probability Sampling (Gold Standard)
- Random sampling: each population member has an equal chance of being selected.
- Simple random sampling: establish a sampling frame → determine the sample size → use a randomized strategy to select elements.
  - Sampling frame: the list of units composing a population, from which the sample is selected.
  - Sample size: bigger is better, but researchers must manage resources. A sample is big enough when it can detect a statistical effect at the desired significance level.
- Systematic sampling: every kth element in the total list is chosen.
  - Sampling interval: the distance between elements selected in the sample (population size / sample size).
- Stratified sampling: the population is divided into strata, and the sample is taken within strata until the needed numbers are met.
- Benefits of random sampling: less likely to be influenced by the researcher; more likely to be representative of the population.

Nonprobability Sampling
- Why use nonprobability sampling? Probability sampling is theoretically best but often practically impossible.
- Convenience sampling: reliance on available participants; can introduce biases such as a nonrepresentative sample and self-selection.
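The simple random and systematic procedures above can be sketched as follows. The sampling frame is a hypothetical numbered list of 1,000 population members; in systematic sampling the interval k = population size / sample size, with a random starting point within the first interval.

```python
import random

random.seed(7)

# Hypothetical sampling frame: a numbered list of 1,000 population members
frame = list(range(1, 1001))
n = 50  # desired sample size

# Simple random sampling: every element has an equal chance of selection
simple_sample = random.sample(frame, n)

# Systematic sampling: interval k = population size / sample size,
# then take every kth element starting from a random point within [0, k)
k = len(frame) // n          # 1000 / 50 = 20
start = random.randrange(k)
systematic_sample = frame[start::k]

print(len(simple_sample), len(systematic_sample))  # 50 50
```

Stratified sampling would simply repeat `random.sample` within each stratum until the needed numbers are met.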
- Purposive (judgmental) sample: specific people are chosen based on the research question.
- Snowball sample: collect data from a few known members of a target population and ask them to identify other members.
- Quota sampling: participants are selected nonrandomly based on their known proportion in the population.
- Are nonprobability samples OK? Convenience sampling can contribute to valid inferences if the sample is representative and the study is repeated (replication is key).

Pragmatic Approaches to Increase External Validity
- Specify the population → consider bias → measure demographics.

Tips for Writing Survey Measures
- Measure data at the most sophisticated level possible.
- Write more items than needed: some items may turn out to be "bad" items.
- Every item must reflect the variable.
- Keep items short.
- Avoid double-barreled questions (e.g., "I try to be cheerful and pleasant.").
- Questions should be clear: avoid jargon; use language the respondent understands; be specific and precise; provide specific frames (e.g., times, relationships); avoid "always" or "never" (though they can be reasonable as anchor points in some cases).

Week 4: Experimental Design

Key Concepts
- Experimental design: the experimenter introduces some action/manipulation/treatment and observes the consequences.
- Control: essential for establishing causality.
- Criteria for causality:
  - X is related to Y.
  - X temporally precedes Y.
  - X and Y are not related through some third variable.
  - The relationship between X and Y is not spurious.

True Experiments
- Control group: helps control participant-related threats to internal validity (e.g., Hawthorne effect, placebo effect, maturation, experimental mortality).
- Experimental group: also called an experimental condition or treatment condition.
- Random assignment: ensures the experimental and control groups are equivalent, eliminating selection bias.
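Random assignment as described above can be sketched in a few lines; the participant IDs are hypothetical. Shuffling the pool and splitting it gives every participant an equal chance of landing in either condition, which is what eliminates selection bias.

```python
import random

random.seed(1)

# Hypothetical participant pool of 20 people
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle, then split evenly between conditions
random.shuffle(participants)
half = len(participants) // 2
control, treatment = participants[:half], participants[half:]

print(len(control), len(treatment))  # 10 10
```

Note this is assignment to conditions, not sampling: the pool may still come from a convenience sample, which affects external rather than internal validity.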
- Double-blind experiment: neither participants nor research assistants know who is in the control or experimental group, reducing researcher-related threats to internal validity.
- Manipulation checks: ensure the operationalization of the IV was as intended.

Experimental Designs
- Pretest-posttest control group design
- Posttest-only control group design
- Solomon four-group design

Quasi-Experimental Designs
- Lack full experimental control.
- Typically involve a comparison group that is not randomly assigned.
- Types: time series design; nonequivalent control group design; multiple time series design.

Non-Random Assignment Designs
- One-shot case study
- One-group pretest-posttest
- Static group comparison

2. Reliability and Validity

Reliability
- Definition: the consistency of a measure. If the same technique is applied repeatedly, it should yield the same result.
- Techniques:
  - Test-retest: consistency over time.
  - Internal consistency:
    - Split-half reliability: the correlation between two subsets of items.
    - Item-total reliability: the correlation between individual items and the total score.
  - Reliability coefficients: measure agreement between items (e.g., Cronbach's alpha, with 0.70 as a general cutoff).
  - Confirmatory factor analysis: ensures adequate fit.

Validity
- Definition: the accuracy of a measure. The measure should reflect what it intends to measure.
- Types:
  - Content validity:
    - Face validity: the measure appears to represent the concept.
    - Expert panel validity: experts evaluate the measure.
  - Criterion validity:
    - Predictive validity: the measure predicts future behavior.
    - Concurrent validity: the measure correlates with similar measures.
  - Construct validity:
    - Convergent construct validity: the measure correlates with related variables.
    - Divergent validity: the measure does not correlate with unrelated variables.

3. Threats to Validity

Internal Validity Threats
- Placebo effect: behavior change due to believing one has received a treatment.
- Hawthorne effect: behavior change due to being observed.
- Maturation: natural changes over time.
- Mortality: participants dropping out.
- Researcher threats:
  - Observer bias: the researcher's knowledge biases observations.
  - Experimenter effect: the researcher's behavior influences participants.
  - Researcher attribute effect: the researcher's characteristics bias results.

External Validity Threats
- Ecological validity:
  - Testing interaction: the artificial setting affects results.
  - Selection interaction: the sample differs from the population.
  - History interaction: results are not generalizable because of the time period.
- Addressing ecological validity: creative procedures; field experiments; noting limitations.

Week 5: Limitations of Quantitative Research

1. Introduction
- All studies are wrong: How wrong? What are the limitations of the study? How much confidence should we have in the results?

2. Robust Findings
- Definition: findings that can be consistently replicated.
- Example: relationship satisfaction correlates with commitment.

3. Quantitative Concerns

Ecological Validity
- Definition: would the results hold up outside the research environment?
- Threats to external validity:
  - Testing interaction: does being in the experiment make participants non-representative?
  - Selection interaction: are the selected participants different from the general population?
  - History interaction: are the results influenced by the time period (e.g., effects of technology on teens during COVID-19)?
- Addressing ecological validity: creative procedures; field experiments; noting limitations.

Precision
- Conceptualization: is the variable clearly and logically defined?
- Operationalization: is there a gap between the conceptualization and the operationalization?

Reliability
- Internal reliability: are the measures consistent?
- Indicators: are all indicators measuring the same thing?
- Cronbach's alpha: should be 0.70 or higher.
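Cronbach's alpha can be computed directly from its definition, alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch with hypothetical response data (real analyses would use a stats package):

```python
import statistics

def cronbach_alpha(items):
    """items: one list of responses per scale item, respondents aligned."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(vals) for vals in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

# Hypothetical 1-7 responses from five people on three apprehension items;
# the items move together, so alpha clears the 0.70 cutoff.
items = [
    [5, 6, 2, 7, 3],
    [4, 6, 1, 7, 2],
    [5, 7, 2, 6, 3],
]
print(cronbach_alpha(items) > 0.70)  # True
```

If the items did not covary (i.e., were not measuring the same thing), the total-score variance would be close to the sum of item variances and alpha would fall toward zero.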
- Confirmatory factor analysis (CFA): requires a high level of fit.
- Example: a latent variable (e.g., anger) and its indicators.
- Measurement error: the difference between a measured quantity and its true value.

Self-Report Biases
- Social desirability bias: participants may respond in the way they think is socially acceptable.
- Example: the parking space experiment (measuring how long someone takes to leave a parking space).
- Lack of memory/knowledge: participants may not remember or know the information.
- Perception: participants' perceptions may influence their responses.
- Good participant effect: participants may guess the hypothesis and respond accordingly.

Confounds
- Definition: other variables that could be driving the effect.
- Examples: the ice cream and murders study; a nice person trying to be mean to raise participants' cortisol levels.

Sampling Concerns
- Representativeness: does the sample adequately represent the population?
- Size/power: is the sample large enough to detect the effects?
- Small/underpowered studies may increase false positives.
- Large samples can identify very small effects that may not be substantively important.
- Example: a study on autistic children and anime.

Overstating the Implications
- Causation: research rarely "proves" something, especially in social sciences, where falsifiability is key.
- Correlation vs. causation: remember that correlation does not equal causation.
- Effect sizes: consider significance and confidence intervals.

Week 7: Research Ethics and REBs

1. Technology and Research Ethics

Data Accessibility
- Technology makes large datasets easily searchable, scrapable, and analyzable.
- Ethical consideration: is it ethical to use this data?

Informed Consent
- Ensuring true informed consent:
  - Participants often do not understand algorithms (Hallinan, Brubaker, & Fiesler, 2019).
  - Participants rarely read terms of service (Obar & Oeldorf-Hirsch, 2020).
- Public data: is the data considered public?
- Participant perspective: how would participants view the research if their content were included?
- Risk minimization strategies:
  - Obtain informed consent if possible.
  - Delete names and other identifiable information (note: direct quotes from tweets and posts are identifiable).
  - Attempt to acquire informed consent prior to dissemination, especially if only a few people are identifiable.

2. Association of Internet Researchers Guidelines

Internet Research
- Types of research: big data from scrapes/APIs; social listening; social analytics; building dashboards; algorithm design; user experience; semantic/sentiment analysis; digital rhetoric.

Internet Research Ethics 3.0
- Venue/platform considerations: the ethical expectations of the venue; the possibility of obtaining informed consent.
- Ethical and legal considerations: legal requirements for the researcher (e.g., REB approval in Canada); the ethical assumptions of the subjects; the risks and benefits of the research.

Data Management
- Data handling: management, storage, and representation of data; securing sensitive data (e.g., information on self-harm or criminal activity).
- Anonymization: the ethical considerations of anonymizing data.
- Data minimization: collect and store only the data necessary to answer the research questions.

Legal Considerations
- Adherence to terms of service.
- Compliance with local laws.
- Avoiding legal risks for users/participants.
- Maintaining privacy for politically sensitive subjects.

Benefits
- Data utility: can the data answer relevant questions?
- Understanding the sample (data, users, etc.).
- The proprietary nature of social media and internet data limits generalizability for independent researchers.

3. Whose Ethics?
Ethics Legislation
- Historical context: the Tuskegee syphilis study; the Stanford prison study; Milgram's obedience studies.

Beyond the REB (Research Ethics Board)
- REB role:
  - Provides baseline ethical guidelines within a legal framework.
  - Protects participants and the university from liability.
  - Tied to federal research funding.
- Research outside universities: market research; journalism; social media platforms; user experience; opinion/political polling.
- Regulatory bodies: such research may be subject to other regulatory bodies or codes of conduct, or may not.
- Ethical reminder: just because you can, doesn't mean you should.

4. Research Ethics in Canada

Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans – TCPS 2 (2022)
- Human subjects research involves: human beings; the use of private information that can easily identify individuals; the use of bodily materials.

Three Principles
- Respect for persons:
  - Individuals are autonomous agents.
  - Persons with diminished autonomy are entitled to protection (e.g., youth, cognitive impairment, mental health issues).
  - Informed consent: based on a complete understanding of the research purpose, procedures, risks, and benefits.
- Concern for welfare:
  - Researchers must secure participants' well-being.
  - Maximize benefits and minimize harms.
  - Risks and benefits should be outlined in the informed consent.
- Justice:
  - Fair distribution of risks and benefits.
  - Ensure no segment of the population is unduly burdened or denied benefits.

Examples of Injustice
- Research on poor ward patients benefiting private patients.
- Research on Nazi concentration camp prisoners.
- The Tuskegee study of untreated syphilis in rural Black men.

Research Ethics Board (REB)
- Role: upholds the Tri-Council statement.
- Composition: a panel of experts reviewing ethics applications prior to data collection.

REB Considerations
- Minimization of risks.
- Reasonableness of risks compared to benefits.
- Equitable selection of subjects.
- Privacy and confidentiality procedures.
- Data monitoring plan.
- Informed consent process.
- Safeguards for vulnerable populations.

Week 8: Qualitative Approaches

1. Empirical Criteria for Qualitative Work

Concepts
- What are the theoretical processes?
- What observations support these concepts?
- Does the author show how the concepts evolve from the research?
- How many concepts are there, and are they systematically related?

Conceptual Density
- Are the concepts well developed?
- Are the conceptual connections tightly linked?

Variation
- Were the concepts observed under a range of different conditions?
- Is the variation accounted for by the theoretical concepts?

Substantive Contribution
- Does the study deliver new information?
- Does it produce guidelines for action?
- Are the results theory-building?
- What has society or the discipline gained by having this researcher do this project?

2. Qualitative Coding

What are Codes?
- Shorthand devices to label, separate, compile, and organize data.

What is Coding?
- Interpretation: breaking data down into component parts and naming those parts.

Types of Coding
- Deductive coding: coding from a developed codebook, itself built from theory or extant research; often indicative of a more quantitative process.
- Inductive coding: emergent coding from the data; theory develops from the codes; often indicative of a more qualitative process.

Iterative Process
- Analysis takes place after some data has been collected, influencing further data collection.

Stages of Coding
- Open coding:
  - Unrestricted initial coding.
  - Opens up possibilities in the data.
  - Initial interpretations are tentative.
- Axial coding:
  - The process of making connections between categories.
  - Brings previously separate categories together under an overarching theory or principle.
- Selective coding:
  - Identifying the properties of categories and constructs.
  - Exploring the attributes of categories along different dimensions.

Code Books
- Provide explanations of what each code means.
- Definitions of codes are more abstract than the data.
- Serve as a touchstone when returning to coding, or for multiple coders.

Creating Interpretations
- Researchers build the narrative of their data from the codes.
- Different researchers may draw different conclusions, but conclusions should be plausibly supported by the data.

3. Qualitative Approaches

Qualitative Ontology
- The world consists of multiple, emergent realities that are always changing, formed through negotiations between the self and various people, objects, and events.

Purpose of Qualitative Research
- To further a detailed, rich, and thick understanding of human communication.
- Focus: crafting plausible or credible explanations of social processes and human behavior.

Common Methods
- Grounded theory: an inductive process in which theoretical explanations emerge from data/observations.
- In-depth interviews: detailed, personal interviews to gather deep insights.
- Focus groups: group discussions to explore collective views.
- Ethnography: observing and studying people in their natural environment.
  - Outside observer: observing without participating.
  - Participant observation: observing while participating.
  - Autoethnography: the researcher's personal experience as data.
- Content/discourse analysis: analyzing written or spoken communication.
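Deductive coding from a codebook can be sketched as a simple keyword match. The codebook and excerpts here are hypothetical, and real coding relies on human interpretive judgment; software like this only flags candidate passages for a coder to review.

```python
# Hypothetical codebook: code name -> indicator keywords drawn from theory
CODEBOOK = {
    "apprehension": ["nervous", "worried", "afraid"],
    "preparation": ["practiced", "rehearsed", "notes"],
}

def code_excerpt(excerpt):
    """Return every code whose indicators appear in the excerpt."""
    text = excerpt.lower()
    return [code for code, words in CODEBOOK.items()
            if any(w in text for w in words)]

excerpts = [
    "I was so nervous I dropped my notes.",
    "I rehearsed the speech twice with a friend.",
]
for e in excerpts:
    print(code_excerpt(e))
# ['apprehension', 'preparation']
# ['preparation']
```

Inductive coding would run the other way: codes would emerge from reading the excerpts, and the codebook would be written afterward as the touchstone for later coders.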
Research Outputs
- Codes: shorthand labels for data.
- Categories: groupings of codes.

Data Collection and Analysis
- Data collection and analysis are interrelated processes.

Important Qualitative Terms
- Saturation: the point at which no new information is being discovered.
- Member validation: participants validate the findings.
- Sensitizing concepts: initial ideas that guide the research.
- Iterative: repeated cycles of data collection and analysis.

Rich/Thick Descriptions
- Providing detailed and nuanced descriptions of the research context and findings.

Week 9: Interviews

Definition
- An interview "quite literally... develops a view of something between (inter) people" (Brenner, 1985).

Purposes of the Qualitative Interview
- Understand people's experiences and perspectives.
- Elicit the language forms used by people.
- Gather information about things or processes that cannot be observed by other means.
- Inquire about the past.
- Verify information obtained from other sources (triangulation).
- Member checking.

Interview Structures
- Structured interview: more likely in quantitative research; very prescribed, with little to no deviation in questions.
- Semi-structured interview: a list of questions or topics to be covered; deviations in question and order are normal.
- Unstructured interview: small prompts rather than a specific schedule/guide; more conversational.

Reflexivity in the Interview Process
- Definition: researchers strive to make their influence on the research explicit to themselves and their audience (Gentles et al., 2014).
- Self-awareness: continual evaluation of subjective responses, intersubjective dynamics, and the research process itself (Finlay, 2002).
- Constructing meaning: interviewers and interviewees construct meaning together.
- Positionality statements: reflecting on the researcher's own subjectivity and how it influences the study.
Narratives
- A good interview solicits the stories of the participants, giving shape to human experience and how we make meaning about our social world.

Advantages of Interviews
- Capture data on issues resistant to observation.
- Allow reconstruction/interpretation of events.
- Less intrusive than observational methods.
- Breadth of coverage: can include a different set of people than observational methods.
- Focus: can concentrate on specific phenomena.

Interview Goal
- Understand how participants view the world, engage in their social world, and make meaning about social phenomena.

2. Types of Qualitative Interviews

Respondent Interviews
- Explore how people express their own views, construe their actions, and view their social world.

Ethnographic Interviews
- Informal and conversational, often happening in the midst of other social actions.

Informant Interviews
- Interviewees who can inform researchers about the "scene."

Interviews as Eliciting Storytelling
- Good qualitative interviews allow participants' stories to emerge, giving shape to human experiences and how we construct meaning about our social world.

3. Types of Interview Questions

Interview Schedule
- Qualitative researchers use a variety of orderings to build their interview schedules: chronological order; broad to specific (funnel); specific to broad (inverted funnel).

Types of Questions
- Introducing questions: help open up the interview and prompt the interviewee to begin their story.
- Direct questions: designed to elicit a direct response, often yes-or-no questions.
- Indirect questions: projective questions asking about things the interviewee does not have direct knowledge about.
- Follow-up questions: ask for elaboration on a particular answer; can be indirect and function like back-channels.
- Probing questions: use direct questions to follow up.
- Specifying questions: draw out more precise descriptions from general statements.
- Structuring questions: help transition from one interview topic to the next.
- Interpreting questions: ask the interviewee for clarification.
- Silence: allows interviewees to gather their thoughts, prevents interruptions, and may encourage them to continue talking to fill the silence.

4. Focus Groups

Definition
- An interview conducted with a group of persons, capitalizing on communication as a collaborative process.

Purpose
- Interaction between interviewees helps uncover a deeper understanding of the topic.

Focus Group Size
- Typically 7-12 people: small enough so everyone can talk, large enough to ensure a diversity of opinions.

Focus Group Makeup
- Each group should be homogeneous in terms of demographic characteristics (e.g., age, sex, ethnicity, and relevant characteristics of interest).
- Members should not know each other prior to the study, to avoid shared knowledge that may obscure true meanings.

Focus Group or Focus Groups?
- Studies should include more than one group to ensure diverse perspectives and avoid atypical results.
- Saturation is reached when new groups produce no new insights.

Advantages
- Captures real-life data.
- Flexible.
- High face validity: non-researchers can understand the results.
- Group dynamics bring out important but unanticipated aspects of the topic.

Disadvantages
- Less researcher control.
- Data can be difficult to analyze.
- Moderation requires special skills.
- Differences between groups can be troublesome.
- Groups are difficult to assemble.
- Discussions must be conducted in a conducive environment.

Week 10: Ethnography

1. Ethnography: Field Notes

Importance of Field Notes
- Human fallibility: researchers need to record what happens in the field because the human mind is fallible.
- Direct recordings: might require permission (check local laws); will need to be transcribed.
- Field notes allow the researcher to intersperse observation, participation, and recording.
Types of Field Notes
- Mental notes:
  - Useful when it would be inappropriate to change the setting to take a direct note.
  - Should be recorded at the first opportunity.
- Jotted/scratch notes:
  - Brief notes to jog the memory later.
  - Include "little phrases, quotes, key words."
  - Usually jotted down inconspicuously.
- Full field notes:
  - Written as soon as possible, with as much detail as possible.
  - The field notes ARE the data.
  - Begin the analytic process by bracketing off researcher thoughts and ideas.
- Analytic memos:
  - Written reflections on the nature of the data and the overall project.
  - Incorporate researcher reflexiveness.
  - Bridge the gap between data and interpretation.

Good Field Note Habits
- Journaling: developing a habit of journaling can help maintain a record of the field.
- Allocating time: good field notes typically take the same amount of time as was spent in the field.
- Detail: more detail allows for greater analytic opportunities, but there are limits.
- Analysis: early versions of codes can emerge during the field note process.
- Reflection: the field notes reflect the researcher; where do they need to be more detailed, more curious, etc.?

2. Ethnography: The Field

The Field
- Site of observation: wherever the culture is happening.
- Examples: raves, TSA lines, prisons, the US/Mexico border, assisted living centers, cruise ships, hostels, schools, social media, protests, Pride parades, workplaces, the Rally to Restore Sanity, food trucks, farmers' markets, Girls on the Run, fire stations, 911 dispatch.

Access
- Preparation: researchers must consider how they will gain access to the site of their study.
- Questions to consider: What is the goal of the study? Is the researcher a member of this community? What resources will be needed?

Types of Access
- Overt:
  - Have direct permission from group leaders/members.
  - Find sponsors/gatekeepers.
   Participants may change behavior or keep things back because they know they are being observed.
 Covert:
   Conduct the ethnography without informing participants.
   Public space/undercover.
   Participants may act naturally.
   May be difficult to record/take notes.
   Ethical questions emerge.

Time
 Duration: How much time will a researcher spend in the field?
 Constraints:
   Some events only happen at particular times.
   Researchers have other responsibilities.
   Limitations from funding agencies.
   Limitations imposed by the group.

Activity
 Researcher Activities in the Field:
   Observe.
   Take notes.
   Participate.
   Conduct ethnographic interviews.
   Collect textual materials.
   Set up future interviews.

3. Ethnography: General Concepts

Definition
 Ethnography: Long-term immersion in a field site in order to write about a culture.
 Components:
   Ethno: culture.
   Graph: something written.

The Researcher
 Roles: Participant? Observer?
 Positionality.
 The researcher as lens/instrument.
 Liminality.

Types of Participants
 Complete Participants:
   Operate under pretense.
   Fully recognized as members of the scene but not known to be acting as researchers.
   Restricts some freedom.
   Potential ethical issues.
 Participant-As-Observer:
   Acknowledges professional motives to site members.
   Ongoing negotiation of interests.
   Allows for naïve participation.
 Observer-As-Participant:
   Primarily invested in observation.
   May still interact with members casually and indirectly.
 Complete Observers:
   Not really known to the participants at all.
   Works best for public settings (e.g., crowds, public websites).

Gatekeepers
 Definition: Individuals who have the power to deny or grant access to a social setting.
 Authority: Their power and authority may be formal or informal.

Participants
 Sampling: Purposive sampling; participants are typically whoever is in the field.
 Informed Consent: Not always possible.
 Confidentiality: Still important.
 Member Validation: Participants can contribute to the overall research design and output.

Goal
 Objective: Describe and interpret the observable relationships between social practices and systems of meaning, based on firsthand experience and exploration of a particular cultural setting (Lindlof & Taylor).

Report
 Details: How many hours of fieldwork, pages of field notes, etc.
 Coding Scheme: Type of coding scheme used, number of coders, and how coding and data collection were intertwined.
 Findings: Typically presented as themes emerging from the research site.

Week 11: Limitations of Qualitative Research

1. Evaluating Qualitative Research

Key Points
 Qualitative is not Quantitative: And that is okay.
 Qualitative IS Empirical: It is based on observed evidence.
 Qualitative is NOT Generalizable: It focuses on producing thick, contextualized descriptions of social processes.

Qualitative Criteria
 Support for Conclusions: Does the research process adequately support the conclusions presented by the researcher(s)?

Qualitative Reporting
 Theoretical Consistency: Is the sample theoretically consistent? On what basis was it selected?
 Major Categories: What major categories emerged?
 Events, Incidents, Actions: What events, incidents, or actions point to these categories?
 Relationships: What are the relationships between categories?

Credibility Considerations
 Triangulation:
   Comparison across two or more forms of evidence.
   Can be done with multiple sources, multiple methods, and/or multiple researchers.
 Negative Case Analysis:
   New data that disconfirms previous explanations.
   Must be incorporated into the analysis, either by reworking a category definition or by creating a new category.
 Member Validation:
   Returning to the participants to check their perceptions of the researchers' findings.
   Can also involve people similar to the researcher.
   Often done at the end of the project but can happen at any analytic stage.

2. Challenges in Qualitative Research

Time-Consuming
 Field Time: Requires extended time in the field.
 Saturation: Data collection is ideally driven by saturation, not time constraints.
 Data Preparation: Can be a lengthy process.
 Coding: Can be a lengthy process.

Balance
 Participant and Researcher Expertise: The two need to be balanced.

Resonance
 Good Writing: Good qualitative research requires good writing.

Trust
 Building Trust: Essential for access and representation.
 Avoiding Exploitation: Important to avoid exploiting the community under study.

Plausibility
 Audience-Dependent: The goal of qualitative research is ultimately audience-dependent.

Overcoming Challenges
 Rich Understanding: We stand to gain a rich, in-depth understanding of how humans engage in symbolic and cultural processes.

3. Empirical Criteria for Qualitative Work

Concept Generation
 Theoretical Process: How were concepts generated?
 Observations: What observations support these concepts?
 Evolution: Does the author show how the concepts evolved from the research?
 Number of Concepts: How many concepts are there?

Systematic Relationships
 Conceptual Density: Are concepts well developed and tightly linked?
 Variation: Were concepts observed under a range of different conditions? Is all the variation in the observable data accounted for by the theoretical concepts?

Substantive Findings
 New Information: Do the findings deliver new information?
 Guidelines for Action: Do they produce guidelines for action?
 Theory-Building: Are they theory-building?
 Contribution: What have we gained by having this researcher do this project?

Week 12: Transcending Quantitative and Qualitative Research

1. Understanding Methods

Methods as Tools
 Purpose: Methods are tools for understanding our social world.
 Question-Driven: Questions should drive methods, not the other way around.
 Analogy: "If all you have is a hammer, everything looks like a nail."

Common Goals
 Underlying Goal: Both quantitative and qualitative researchers aim to deepen our knowledge of human behavior and social processes.

2. Fuzzy Divides vs. Sharp Distinctions

Quantitative Research
 Focus: Behavior
 Approach: Deductive
 Data: Numbers
 Setting: Artificial

Qualitative Research
 Focus: Meaning
 Approach: Inductive
 Data: Words
 Setting: Natural

3. Multi-Strategy Research

Triangulation
 Purpose: Use one type of research to corroborate findings from another type.
 Example: Following up on survey data with qualitative interviews.

Facilitation
 Purpose: Use one research strategy to aid another.
 Example: Using focus groups to create possible survey items.

Complementarity
 Purpose: Combine diverse aspects of an investigation using different strategies.
 Example: Using survey methods to assess an experience and interviews to get more in-depth reactions.
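Aside: the Week 10 reporting notes mention stating the coding scheme and the number of coders. These notes do not cover how agreement between coders is measured, but one common statistic for that purpose is Cohen's kappa, which corrects raw percent agreement for the agreement two coders would reach by chance. The sketch below is illustrative only; the coder names and codes are hypothetical.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders on the same items, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of items given the same code by both coders.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: based on each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned to six transcript excerpts by two coders.
coder_1 = ["support", "support", "conflict", "conflict", "support", "conflict"]
coder_2 = ["support", "conflict", "conflict", "conflict", "support", "conflict"]
print(cohens_kappa(coder_1, coder_2))  # 2/3, i.e. about 0.67
```

Kappa runs from -1 to 1: values near 1 indicate strong agreement, while 0 means the coders agree no more often than chance would predict.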
