Document Details

Tags

quantitative research, social research methods, research design, social science

Summary

This document is a review of quantitative research methods. It covers different approaches to social research, the nature of quantitative research, and fundamental aspects of research design. The review also offers examples and strategies for conducting quantitative studies.

Full Transcript

Week 2: Research contexts and intro to quantitative

Chapter 1: General research orientations:
- This chapter begins by exploring the fundamental issues in social research, emphasizing two main research approaches: QUANTITATIVE AND QUALITATIVE
- Quantitative research is centered on numbers, measurement, and statistical analysis
- Qualitative research focuses on understanding meaning and interpretation from a subjective, interpretivist perspective
- A key element in social research is the relationship between theory and research, as it determines whether data collection tests or builds theories
- Epistemological and ontological concerns (assumptions about the nature of knowledge and of social reality) are also central to the research process
- The chapter encourages researchers to reflect on how values, politics, and practical issues influence the conduct of research

Chapter goals and learning outcomes:
- The chapter outlines the goal of distinguishing between deductive (theory-testing) and inductive (theory-building) approaches
- Understanding different social science perspectives such as positivism, interpretivism, and critical approaches
- It also emphasizes the importance of understanding how values, politics, and practical considerations affect the research process
- The chapter provides an overview of quantitative and qualitative methods, explaining how the two orientations rest on distinct assumptions and methodologies
- Quantitative research is generally deductive, focusing on hypothesis testing and statistical analysis
- Qualitative research tends to be inductive, aiming to understand social realities through subjective interpretations
- Despite their differences, both approaches are empirical and systematic methods for perceiving the world
- Theories in research can be categorized into middle-range theories, which have a narrower scope (like Durkheim's theory of suicide), and grand theories (such as structural-functionalism or feminism), which are broader and more abstract
- Deductive and inductive reasoning are key to the research process: deductive research begins with theory and tests hypotheses, while inductive research collects data to generate new theories

Epistemological and ontological considerations:
- The chapter covers epistemology and ontology in research
- Positivism advocates objective, value-free research grounded in sensory experience
- Interpretivism focuses on understanding the subjective meanings individuals assign to their actions
- Researchers using the interpretivist approach immerse themselves in social contexts to uncover lived experiences
- The chapter also discusses objectivism and constructionism in ontology
- Objectivism suggests social phenomena exist independently of individual perceptions
- Constructionism holds that social reality is socially constructed through human interactions

Chapter 4: The nature of quantitative research:
- This chapter introduces quantitative research, which is characterized by its systematic use of numerical data to investigate social phenomena
- The quantitative approach follows a deductive reasoning process: theory formulation, hypothesis testing, data collection, and analysis
- Measurement plays a crucial role in ensuring the accuracy and reliability of the data
Key steps in quantitative research:
1) Theory formulation: establishing a theoretical framework
2) Hypothesis: developing testable hypotheses based on the theory
3) Research design: deciding how to collect data (e.g., surveys or experiments)
4) Operationalization: translating abstract concepts into measurable variables
5) Data collection: gathering numerical data through structured methods
6) Data analysis: using statistical techniques to identify patterns and test hypotheses
7) Conclusions: drawing broader implications from the findings
- The chapter underscores the importance of measurement, reliability, and validity in quantitative research
- Reliability refers to the consistency of measurement over time or across different raters
- Validity ensures that the research truly measures what it intends to measure

Goals of quantitative research:
- Accurately measure variables
- Establish causality
- Generalize findings to broader populations
- Ensure that results are replicable

Lecture on this week:
- Confidence in quantitative research findings
- The idea of sampling from a population and using statistical analysis to make inferences is central
- The central limit theorem explains that, with repeated sampling from a population, the distribution of sample means approaches a normal distribution and the mean becomes more predictable (a simulation sketch follows at the end of this week's notes)

Variables in quantitative research:
- Variables are the building blocks of quantitative research
1) Categorical (nominal) variables: named categories without an inherent order (hair color, gender)
2) Ordinal variables: ordered categories without precise differences between data points (grades)
3) Interval variables: ordered data with meaningful differences but no true zero (temperature in Celsius)
4) Ratio variables: ordered data with meaningful differences and a true zero (height and weight)

Conceptualization:
- The process of refining abstract concepts into measurable variables
- This is especially important for complex concepts such as communication apprehension, which has cognitive, physical, and behavioral dimensions

Hypotheses and research questions:
- Research questions explore relationships between variables, often without specifying exactly how the variables are related
- Hypotheses propose specific relationships between variables and can be tested through quantitative research

Connection between lecture and chapters:
- Both share a focus on the foundational aspects of quantitative research, emphasizing its reliance on positivism and the importance of objectivity and measurement
- Both address the process of conceptualizing and operationalizing variables, ensuring reliability and validity, and using clear definitions to refine abstract concepts
- The lecture's discussion of variable types and sampling parallels the chapter's explanation of the quantitative research process, including hypothesis testing and data collection
- Both highlight the limitations of quantitative methods, such as oversimplification and biases, while stressing reflexivity and the need for critical interpretation
- Examples in the lecture, like Wordle, align with the chapter's use of practical illustrations to connect theory with real-world applications
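To make the central limit theorem point from the lecture concrete, here is a minimal Python sketch (not from the course materials; the skewed population and the sample sizes are illustrative assumptions) showing that the means of repeated samples cluster around the population mean and spread less as the sample size grows.

```python
import random
import statistics

random.seed(42)

# Hypothetical skewed "population", e.g. hours of social media use per week
population = [random.expovariate(1 / 10) for _ in range(100_000)]
print("Population mean:", round(statistics.mean(population), 2))

for n in (5, 30, 200):  # sample sizes (illustrative)
    # Draw many independent samples and record each sample's mean
    sample_means = [
        statistics.mean(random.sample(population, n)) for _ in range(2_000)
    ]
    # As n grows, the sample means vary less and center on the population mean
    print(
        f"n={n:>3}: mean of sample means={statistics.mean(sample_means):6.2f}, "
        f"std of sample means={statistics.stdev(sample_means):5.2f}"
    )
```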
Week 3: Survey design

Chapter 5: Survey research:
- Survey design involves selecting the most appropriate methods for collecting data, including structured interviews and self-administered questionnaires.

There are two survey formats:
- Interviews: provide greater control and allow for probing, but can introduce interviewer bias
- Questionnaires: allow anonymity and standardization, but lack interaction with respondents

Types of questions:
- Open-ended questions: allow respondents to answer in their own words. Useful for exploratory research or uncovering unexpected insights, but challenging to code and analyze in large-scale quantitative studies
- Closed-ended questions: provide predefined options for respondents. Easier to standardize and analyze but restrict the depth of responses; they reduce variability and make data comparison more straightforward

Challenges in question design:
- Avoid ambiguous wording and double-barreled questions
- Maintain brevity to sustain respondent focus
- Place sensitive questions later to build rapport
- Consider response patterns and social desirability biases

Survey administration:
- Telephone interviews: quick and cost-effective but lack depth
- Face-to-face interviews: richer data but more resource-intensive
- Online surveys: convenient but face engagement issues

Error reduction:
- Pre-test questions to minimize recall bias and ambiguity
- Use vignettes to encourage honest responses on sensitive topics
- Consider secondary analysis, acknowledging potential limitations in existing data

Chapter 7: Quantitative sampling:
- Chapter 7 shifts focus to quantitative sampling techniques, crucial for selecting representative samples that allow researchers to generalize findings to a larger population.
- Sampling involves selecting a subset of the population for study, as it is usually impractical to study the entire group.
- Using proper sampling techniques ensures that inferences made from the sample can be generalized to the broader population, minimizing bias and enhancing the validity of the findings.
- One of the most important techniques in quantitative research is probability sampling.

Probability sampling:
- Involves random selection so that every member of the population has a known probability of being included in the sample.

Types of probability sampling include (see the sampling sketch at the end of this week's notes):
- Simple random sampling: each individual has an equal chance of being chosen
- Systematic sampling: every kth individual from a list is selected
- Stratified sampling: the population is divided into subgroups (strata) based on characteristics like age or gender, and random samples are taken from each subgroup
- Multi-stage cluster sampling: takes a broader approach, randomly selecting groups first (such as schools or neighborhoods) and then randomly selecting individuals within those groups

Non-probability sampling:
- In contrast, non-probability sampling methods are less rigorous and do not involve random selection.
- These techniques are often used when probability sampling is impractical.
- Convenience sampling relies on participants who are easiest to access, but this method can introduce significant biases, such as non-representative samples or self-selection bias.
- Snowball sampling is useful for hard-to-reach populations, where initial participants refer others to join the study. While this can help access otherwise difficult-to-reach groups, it also creates dependencies between participants and may introduce bias.
- Quota sampling involves selecting participants non-randomly to fill quotas that reflect certain population characteristics, like gender or age, but it can still lead to non-randomness in other characteristics.
- Sampling challenges are also discussed, such as the potential for bias when random sampling is not used or when sampling frames (the list of individuals from which samples are drawn) are incomplete or outdated.
- Sampling error can occur if the sample doesn't accurately reflect the characteristics of the population.
- The chapter highlights the importance of selecting an appropriate sample size: larger samples generally improve the reliability and accuracy of the findings but come with increased costs and resource demands.
- Higher response rates are emphasized as crucial to ensuring that the sample accurately represents the population; strategies such as follow-up surveys or incentives may be needed to improve response rates.

Lecture on this week:
- Lecture 3 introduces fundamental concepts regarding populations and sampling.
- A population is the entire group that a researcher wishes to study, such as all students at a particular university or all residents of a country.
- A sampling element is a single case within the population, such as an individual participant in a survey.
- Given that it is often impractical to study every member of a population, researchers select a sample: a subset that is representative of the larger group.
- This connects directly to Chapter 7's discussion of probability sampling, where the goal is to select a sample that accurately represents the broader population.
- The lecture emphasizes the importance of defining the population clearly. A study on married couples, for example, should not include unmarried individuals, and a study focusing on teenagers should avoid including other age groups.
- The selection of a representative sample ensures that findings can be generalized to the population.
- Probability sampling methods, such as simple random sampling, ensure that each member of the population has an equal chance of being selected, reducing bias and improving the generalizability of the study's results.
- The lecture also touches on the practical limitations of sampling: in many cases, non-probability methods such as convenience sampling are used due to time or resource constraints, though these come with the risk of bias.
- In discussing non-probability sampling, the lecture highlights the challenges and trade-offs involved. While these methods may be more practical, they are less reliable in terms of representativeness.
- This connects with Chapter 7's exploration of different non-probability methods, such as snowball sampling and quota sampling, where participants are selected based on availability or specific demographic criteria.
- The lecture reinforces the importance of carefully considering bias when using these methods.

Connection between lecture and chapters:
- Both Chapter 5 and Chapter 7, as well as the lecture, emphasize the importance of careful survey and sampling design in ensuring reliable and generalizable findings.
- Chapter 5 discusses various survey formats, including the pros and cons of interviews versus questionnaires, while Chapter 7 focuses on sampling methodologies that ensure a representative sample.
- The lecture connects these ideas by explaining how to define the population clearly and select a representative sample, underscoring the need for randomness in probability sampling to minimize bias.
- Each source highlights challenges in survey design, such as response bias, question clarity, and sampling errors, and stresses the importance of careful planning, pre-testing, and minimizing biases to ensure the accuracy and relevance of research results.
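As a companion to the probability-sampling types listed above, here is a small Python sketch (illustrative only; the toy sampling frame, strata, and sample sizes are invented for the example) showing simple random, systematic, and stratified selection from a sampling frame.

```python
import random

random.seed(1)

# Hypothetical sampling frame: (id, stratum) pairs, e.g. students by faculty
frame = [(i, random.choice(["arts", "science", "business"])) for i in range(1, 1001)]

# Simple random sampling: every element has an equal chance of selection
srs = random.sample(frame, 50)

# Systematic sampling: pick every kth element after a random start
k = len(frame) // 50
start = random.randrange(k)
systematic = frame[start::k][:50]

# Stratified sampling: draw a proportional random sample within each stratum
strata = {}
for unit in frame:
    strata.setdefault(unit[1], []).append(unit)
stratified = []
for name, members in strata.items():
    n_stratum = round(50 * len(members) / len(frame))  # proportional allocation
    stratified.extend(random.sample(members, n_stratum))

print(len(srs), len(systematic), len(stratified))
```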
Week 4: Experimental Design

Chapter 2: Research designs:
- Chapter 2 focuses on the strategic choices researchers make when selecting a research design
- A research design is a framework guiding the process of data collection and analysis
- The choice of design is crucial because it depends on the research goals and the types of questions being asked
- For example, researchers must decide whether their goal is to identify causal relationships, understand how phenomena evolve over time, or explore the meanings attached to social actions
- The chapter distinguishes between nomothetic and idiographic explanations
- Nomothetic explanations: aim to establish generalizable principles (common in quantitative research)
- Idiographic explanations: provide rich, detailed descriptions of specific cases (common in qualitative research)
- The choice between these approaches heavily influences the research design
- One key research design discussed in the chapter is the quasi-experiment, which shares some characteristics with true experiments but lacks random assignment of participants to experimental and control groups
- Quasi-experiments are often used in real-world settings, such as evaluations of policy changes or organizational innovations
- For example, researchers might study the effects of an earthquake on civic pride by comparing an affected city with an unaffected one
- While these studies can offer external validity (they examine real-world conditions), they are limited by challenges like pre-existing differences between the groups being compared
- Cross-sectional designs: involve collecting data at a single point in time
- Cross-sectional designs are useful for identifying relationships between variables but cannot prove causality
- For example, a study might show that people with higher education tend to have higher incomes, but it cannot determine whether education causes higher income or whether other factors are at play
- Cross-sectional studies are beneficial for understanding current social conditions but have limitations in establishing causal relationships
- Longitudinal designs: follow participants over time and allow researchers to track changes in variables, making it easier to establish causal relationships

There are two main types of longitudinal studies:
- Panel studies: track the same individuals or groups over time
- Cohort studies: focus on a specific group with a shared experience (e.g., individuals born in the same year)

Lecture on this week:
- The lecture covers reliability and validity in the context of experimental research

Reliability: consistency, i.e. whether the results of a study would be the same if it were repeated
- This concept is crucial in ensuring that research measurements are stable over time
- Test-retest reliability: ensures that results are consistent over time
- Internal consistency: examines whether items on a scale are correlated
- Item-total reliability: measures whether individual items on a scale correlate with the total score
- Reliability coefficients (such as Cronbach's alpha, written with the Greek letter α) quantify the degree of consistency; a higher coefficient indicates higher reliability, with values close to 1.00 considered ideal (a worked example follows at the end of this week's notes)
Validity: refers to how accurately a study measures what it intends to measure
- Content validity: ensures that a measure covers all relevant aspects of a concept
- Criterion validity: ensures that the measure aligns with other established measures
  - Predictive validity: whether a measure can predict future behavior
  - Concurrent validity: whether scores on a measure align with scores from similar measures
- Construct validity: examines whether a measure fits within the theoretical framework it is supposed to represent
  - Convergent validity: the measure correlates with related variables
  - Discriminant validity: the measure does not correlate with unrelated variables

The lecture then moves on to potential threats to experimental validity.

Internal validity: concerns whether the results of an experiment can be attributed to the manipulation of the independent variable, without interference from other factors
- Threats to internal validity include the placebo effect: participants respond to the belief that they are receiving a treatment rather than to the treatment itself
- Hawthorne effect: participants alter their behavior because they know they are being observed
- Other threats include maturation (natural changes over time)
- Mortality: participants drop out of the study, potentially skewing the results
- Observer bias: researchers may unintentionally behave differently toward participants, affecting the results
- Researcher attribute effects: characteristics of the researchers (such as gender, race, or demeanor) can influence the outcomes of the study

External validity:
- Deals with the generalizability of the study's results beyond the sample

Threats to external validity include:
- Testing interaction: the artificial setting of an experiment may affect how participants behave
- Selection interaction: the sample may differ from the broader population
- History interaction: the time period of the study could influence its results

To address these issues, researchers can use creative procedures or field experiments, which take place in a more natural setting to increase ecological validity.

Connection between lecture and chapters:
- The chapter and lecture are connected in how they emphasize the importance of careful research design to ensure the reliability and validity of findings.
- The chapter outlines various research designs, such as quasi-experiments, cross-sectional, and longitudinal studies, and discusses their strengths and weaknesses.
- These designs are shaped by issues of validity and reliability, as highlighted in the lecture.
- The lecture provides a deeper understanding of how reliability (consistency) and validity (accuracy) affect the interpretation of experimental results.
- Both the chapter and the lecture stress that a sound research design not only helps answer research questions but also ensures that the conclusions drawn are valid and reliable.
- Together, they demonstrate that the quality of data collection and analysis is paramount to drawing informed and accurate conclusions in social research.
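To illustrate the reliability coefficient mentioned in the lecture, here is a minimal Python sketch that computes Cronbach's alpha from item-level scores; the five respondents and four Likert-type items are made-up numbers for demonstration, not course data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]                         # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-type responses (rows = respondents, columns = items)
scores = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 2, 1],
])

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")  # closer to 1.00 = more consistent
```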
Week 6: Ethics
- Ethics are the backbone of social research, ensuring that studies are conducted responsibly and participants are protected
- Both the reading and lecture emphasize core ethical principles: respect for persons, concern for welfare, and justice
- These principles guide researchers in respecting participant autonomy, balancing risks and benefits, and ensuring equity in the burdens and benefits of research
- Central to these principles is the concept of informed consent: participants must voluntarily agree to participate with a full understanding of the study's nature, risks, and benefits
- Researchers must handle data securely to protect privacy and confidentiality, using measures like pseudonyms and ensuring sensitive information remains secure
- The lecture expands on these principles in the context of internet research, highlighting challenges unique to online environments
- For instance, while a lot of data can be scraped or analyzed, questions arise about whether such use is ethical
- Studies show participants often don't read terms of service or understand how their data are used, raising concerns about whether informed consent is genuinely obtained
- The Association of Internet Researchers (AoIR) guidelines address these complexities, urging researchers to consider platform expectations, minimize data collection, and comply with legal and ethical standards
- Direct quotes from online content, even if public, are identifiable and require informed consent wherever possible
- Ethical dilemmas, such as those involving deception, are discussed in both the chapter and lecture
- The Milgram obedience study, the Stanford prison experiment, and the Tuskegee syphilis study are infamous examples of unethical research; they underscore why institutional oversight like research ethics boards (REBs) is vital
- REBs evaluate studies to ensure participant welfare, balancing risks and benefits, and often impose stricter scrutiny for qualitative research, where boundaries between researcher and participant can blur
- Beyond academia, ethics are equally important in fields like market research, journalism, and user experience design
- Even in unregulated environments, the mantra remains: just because you can, doesn't mean you should.
- Balancing knowledge acquisition with ethical considerations is a universal priority, ensuring research contributes to understanding without compromising participants' dignity or well-being

Week 7: Introduction to Qualitative research

Chapter 9: The nature of qualitative research:
- Chapter 9 explores qualitative research as a method focused on capturing the complexity of social life through participants' perspectives
- It prioritizes words, images, and the dynamic nature of social interactions rather than fixed structures or numerical analysis
- This inductive approach relies on iterative processes where general questions guide the selection of sites and participants, and data collection informs the ongoing refinement of ideas
- Methods such as ethnography, interviews, focus groups, and document analysis are central to this approach
- Participatory action research is also highlighted, emphasizing collaboration with those affected by a social issue
- The evaluation of qualitative research uses criteria like credibility, transferability, dependability, and confirmability, which diverge from the traditional measures of reliability and generalizability
- Researchers aim to understand social life as an evolving process, capturing its nuances and the meanings participants assign to their experiences
- Despite criticisms of subjectivity and replication challenges, qualitative research is valued for its ability to provide deep, context-specific insights that illuminate the lived realities of its subjects

Chapter 12: Content analysis:
- Chapter 12 delves into content analysis, emphasizing its dual nature as both a qualitative and quantitative method for interpreting communication
- Coding is presented as a vital process that organizes data into meaningful categories, enabling researchers to uncover patterns and themes
- Quantitative coding uses predefined frameworks to ensure replicability (a small counting sketch follows at the end of this week's notes), while qualitative coding allows for emergent categories, evolving as the research progresses
- Ethnographic content analysis exemplifies this flexibility, emphasizing the iterative refinement of codes to capture deeper meanings
- The chapter also covers advanced techniques like semiotics, which interprets signs and symbols by examining both their literal and implied meanings, and hermeneutics, which contextualizes texts within their social and historical environments
- Critical discourse analysis adds another dimension by exploring how language reflects and shapes societal power dynamics, revealing how media influence public understanding
- Evaluating content analysis involves assessing authenticity, credibility, representativeness, and meaning to ensure robust findings
- While challenges such as researcher bias and subjective interpretation persist, content analysis remains an indispensable method for examining how communication shapes and reflects societal norms

Lecture on this week:
- The lecture focuses on what makes qualitative research rigorous
- It emphasizes the need for a solid theoretical framework supported by evidence
- Concepts should be clearly connected and flexible enough to cover different situations
- The goal of qualitative research is to build theory, offering convincing and meaningful views on social issues
- A big part of the lecture is about coding, which is breaking down data into smaller parts for easier analysis
- Open coding starts by creating basic labels
- Axial coding brings these labels together into bigger themes
- Selective coding refines these themes by looking at different aspects
- The lecture highlights how coding is an ongoing process, guided by general ideas (sensitizing concepts) and codebooks to keep everything consistent
- This helps researchers build clear, evidence-based stories from their data

Connection between lecture and chapters:
- The lecture and chapters form a cohesive narrative about the principles and practices of qualitative research.
- Chapter 9 sets the theoretical foundation, emphasizing the interpretive, iterative, and participant-focused nature of qualitative inquiry.
- This groundwork is expanded in Chapter 12, where content analysis is presented as a specific method that incorporates coding to bridge qualitative and quantitative approaches.
- The lecture ties these elements together by providing a detailed explanation of coding processes, from open coding to selective coding, and by stressing the importance of rigorous evaluation through conceptual density, variation, and substantive contributions.
- Coding serves as the connective thread across all three sources, highlighting its role in organizing and interpreting data within an iterative framework.
- The emphasis on emergent patterns and the evolution of ideas in the lecture aligns with the ethnographic and flexible coding approaches described in Chapter 12.
- Meanwhile, the depth and contextual focus discussed in Chapter 9 are reinforced through the lecture's call for rigorous, evidence-based interpretations.
- Together, these materials provide a unified perspective on how qualitative research methods uncover the nuanced realities of social life.
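As a toy illustration of the quantitative side of coding described in Chapter 12, the short Python sketch below tallies how often predefined codes appear across a handful of text excerpts; the coding frame, keywords, and excerpts are invented for the example and are not from the course.

```python
from collections import Counter

# Hypothetical coding frame: code -> keywords that trigger it (illustrative only)
coding_frame = {
    "community": ["neighbourhood", "community", "together"],
    "conflict": ["dispute", "argument", "protest"],
    "wellbeing": ["health", "stress", "wellbeing"],
}

# Made-up excerpts standing in for interview transcripts or news articles
documents = [
    "The neighbourhood came together after the dispute over the park.",
    "Residents reported stress and health concerns during the protest.",
    "A community meeting was held to discuss wellbeing programs.",
]

counts = Counter()
for doc in documents:
    text = doc.lower()
    for code, keywords in coding_frame.items():
        # Apply the predefined frame: tally a code once per document mentioning any keyword
        if any(keyword in text for keyword in keywords):
            counts[code] += 1

for code, n in counts.most_common():
    print(f"{code}: {n} of {len(documents)} documents")
```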
Week 8: Interviewing techniques

Chapter 11: Interviewing in Qualitative Research:
- Chapter 11 focuses on interviewing techniques used in qualitative research, emphasizing the flexibility and value of unstructured and semi-structured interviews
- The chapter underscores the significance of qualitative interviews in capturing participants' perspectives, as opposed to the rigid structure of quantitative interviews
- It explores the differences between structured and qualitative interviews, with unstructured interviews resembling conversations and semi-structured interviews guided by an interview schedule but still allowing flexibility in follow-up questions
- The use of interview guides, the time-consuming process of transcribing, and the role of focus groups are also discussed
- Focus groups offer an alternative to individual interviews by encouraging group interaction, which can reveal deeper insights
- However, challenges arise in managing group dynamics, selecting participants, and ensuring effective transcription due to multiple speakers
- Finally, the chapter compares qualitative interviews with ethnography, noting that while interviews provide in-depth data, ethnography offers broader contextual insights

Lecture on this week:
- Lecture 8 delves into the purpose and types of qualitative interviews, emphasizing their role in developing a deeper understanding of participants' experiences
- It introduces qualitative interviews as a co-constructed process, where meaning emerges through the interaction between the interviewer and the interviewee
- The lecture stresses the significance of qualitative interviews in eliciting specific language forms, gathering information about things that cannot be observed, and verifying data from other sources
- Member checking, or returning to participants to ensure accuracy of interpretations, is also highlighted as an important aspect of qualitative interviewing
- The lecture further differentiates between various types of interviews, such as respondent interviews, ethnographic interviews, and informant interviews, each serving distinct purposes in understanding social phenomena
- Additionally, the lecture explores the use of interview schedules and different question types, such as introducing, direct, and indirect questions, to guide the interview process

Connection between lecture and chapters:
- Both emphasize the flexibility and depth that qualitative interviews bring to research
- They highlight the importance of understanding participants' perspectives and experiences in their own terms, rather than imposing predetermined categories
- Both resources also discuss the use of interview guides, with the chapter focusing on their role in semi-structured interviews and the lecture detailing specific question types to elicit rich data
- Both highlight the value of focus groups as a tool for gathering collective insights, although the chapter presents additional challenges related to focus group dynamics and transcription
- The overall connection between the chapter and lecture lies in their shared focus on the significance of qualitative interviewing as a tool for gathering deep, context-rich data

Week 9: Ethnographic Research

Chapter 10: Ethnography and participant observation:
- Chapter 10 delves into ethnography and participant observation as key qualitative research methods
- These approaches emphasize immersive engagement with a group or setting over time, enabling researchers to explore cultural, social, and organizational dynamics
- Ethnographers balance observation and interaction, adapting their roles to fit the context and goals of their study
- A critical step in ethnographic work is gaining access, which can range from open public spaces to restricted, closed organizations
- Strategies for access include building trust, securing sponsorship, and negotiating terms, though covert approaches may bypass these formalities
- However, covert methods introduce ethical dilemmas such as deception and lack of consent, posing risks to both participants and researchers
- Ethnographers assume varied roles in the field, from complete participants who blend seamlessly into the group to complete observers who maintain detachment
- Intermediate roles, such as participant-as-observer and observer-as-participant, balance interaction and observation differently
- While covert participation may provide unfiltered insights, it complicates ethical obligations and data collection
- Data collection relies on field notes, which record detailed observations, reflections, and initial interpretations
- These are often supported by audio recordings, photographs, and analytic memos, though each method demands careful ethical consideration, particularly around privacy and consent
- Analytic memos are invaluable for connecting raw data to theoretical insights, shaping the research narrative
- The chapter also covers specialized approaches like visual ethnography, which incorporates images as memory aids and data
- Institutional ethnography examines power dynamics in institutional structures
- Sampling in ethnography is typically purposive or theoretical, focusing on rich data sources, while snowball sampling identifies informants through networks

Lecture on this week:
- The lecture focuses on ethnography as a study of culture through immersive observation and extensive data collection.
- It highlights the variety of fields suitable for ethnographic research, such as raves, workplaces, and protests, emphasizing the diverse contexts in which cultural phenomena occur.
- Gaining access is a central theme, as researchers must navigate overt and covert approaches.
- Overt access involves direct permission, often facilitated by sponsors or gatekeepers, but may alter participants' behavior due to awareness of observation.
- Covert access, on the other hand, allows for more natural interactions but raises ethical concerns and logistical challenges in data collection.
- Time and activity in the field are also emphasized.
- While extended immersion is ideal, practical constraints like event schedules, funding, and participant limitations often shape the duration of fieldwork.
- Researchers engage in observation, participation, ethnographic interviews, and textual material collection, ensuring a multifaceted approach to understanding their subject.
- Field notes are a cornerstone of the lecture, described as vital records of observation and participation.
- They range from mental and jotted notes to full, detailed field notes that serve as the primary data source.
- The lecture stresses good field note habits, such as journaling, allocating time for detailed entries, and maintaining reflexivity.
- Analytic memos bridge the gap between raw data and interpretation, encouraging researchers to refine their analytical framework.
- The lecture concludes by noting that field notes not only capture the research process but also reflect the researcher's evolving understanding, underscoring their central role in ethnographic inquiry.

Connection between lecture and chapters:
- Both the chapter and lecture offer complementary perspectives on ethnography and participant observation, emphasizing the depth and challenges of the method.
- The chapter provides a theoretical foundation, detailing roles, sampling strategies, and ethical considerations, while the lecture offers practical insights, focusing on field sites, data collection techniques, and the realities of access.
- The concept of access is a shared focus, with both sources discussing overt and covert approaches and the associated ethical implications.
- Similarly, the importance of field notes is underscored in both, highlighting their role in capturing observations and shaping analysis.
- While the chapter introduces analytic memos and their theoretical significance, the lecture expands on practical habits for creating effective field notes.
- Ethical concerns permeate both discussions, particularly in covert research.
- The chapter offers a more extensive theoretical framework for understanding these challenges, whereas the lecture contextualizes them with practical examples.
- Together, they provide a comprehensive view of ethnography, blending theoretical depth with applied guidance to prepare researchers for fieldwork.

Week 10: Limitations in Qualitative Research

Lecture on this week:
- Week 10 focused on the limitations inherent in qualitative research and how they shape the evaluation and execution of studies.
- Unlike quantitative research, qualitative approaches are not generalizable, and that distinction is entirely acceptable, as qualitative research is grounded in empirical evidence rather than numerical data.
- When evaluating qualitative studies, it's crucial to assess whether the research process sufficiently supports the conclusions.
- Reporting should also consider whether the sample aligns with theoretical consistency, the basis for its selection, the emergence of major categories, and the relationships between events, incidents, or actions pointing to those categories.
- Credibility in qualitative research often relies on triangulation, which involves comparing multiple sources, methods, or researchers to validate findings.
- Negative case analysis, where new data challenge prior explanations, requires revisiting and possibly redefining categories.
- Member validation is another important consideration, typically involving participants or similar individuals reviewing the findings to ensure accuracy and resonance. This process can occur at the project's conclusion or at any stage during analysis.
- Qualitative research presents significant challenges. It is inherently time-consuming, demanding extended periods in the field and allowing data collection to be guided by saturation rather than time constraints.
- The processes of preparing and coding data can also take considerable time.
- Balancing participant insights with the researcher's expertise is essential: the researcher must bring new perspectives to the study for it to be meaningful.
- Effective qualitative research also depends on strong writing skills to ensure resonance with the audience.
- Trust plays a pivotal role, requiring researchers to integrate into the community to build access and avoid exploitation while fostering authentic representation.
- Finally, plausibility is key, as the ultimate goal of qualitative research is to present findings that resonate with the intended audience.

Week 11: Transcending the Quantitative/Qualitative Divide

Chapter 15: Conducting a Research Project:
- Week 11 explores overcoming the divide between quantitative and qualitative research, with Chapter 15 of Social Research Methods offering a guide to conducting research projects.
- Key elements include forming clear research questions, managing time and resources, integrating literature, and writing effectively.
- Research questions are central to focus and evolve as the study progresses, developed through personal interests, advisor input, and literature reviews.
- Time management involves early planning, detailed timetables, and using tools like SPSS or NVivo for analysis.
- Literature reviews identify knowledge gaps, refine questions, and justify the study, requiring critical analysis of controversies and theoretical frameworks.
- Ethical considerations, such as participant safety and risk assessments, are vital, and sampling must align with research goals.
- Writing begins early, iteratively refining sections like the introduction, literature review, methods, results, and discussion.
- Effective writing builds a coherent narrative, avoids jargon, and highlights contributions.
- Safety precautions, including risk assessments and conducting interviews in public settings, ensure researcher and participant protection in challenging environments.
