Summary of Qualitative Research Methods
Summary
This document offers a concise overview of qualitative research methods. It explains key themes such as the difference between qualitative and quantitative approaches, philosophical assumptions, core skills, and ensuring trustworthiness. The summary includes lectures on grounded theory, coding techniques (including the Gioia Method), and case selection.
Summary – Qualitative Methods Lectures

Lecture 1: Introduction to Qualitative Research

Key Themes:

Difference Between Qualitative and Quantitative Research
  o Quantitative Research: Rooted in functionalism; aims for theory testing using structured data collection (e.g., surveys, experiments).
  o Qualitative Research: Rooted in interpretivism; focuses on theory development through rich, detailed understanding of human experiences.

Philosophical Assumptions
  o Functionalist Paradigm (Quantitative):
    ▪ Replication and Objectivity: Seeks generalizability by identifying cause-effect relationships.
    ▪ Theory development happens through hypothesis testing.
  o Interpretivist Paradigm (Qualitative):
    ▪ Multiple Realities: Knowledge is constructed based on participant experiences.
    ▪ The researcher is an instrument in understanding meaning.

Core Skills for Qualitative Research
  o Observation: Systematically recording behaviors and contexts.
  o Interviewing: Conducting open-ended, structured, or semi-structured discussions.
  o Coding and Analysis: Identifying key concepts, themes, and relationships.
  o Building Connections to Theory: Developing insights that contribute to broader academic discussions.

Ensuring Trustworthiness in Qualitative Research
  o Descriptive Validity: Accuracy in reporting observations.
  o Interpretive Validity: Understanding participants' meaning accurately.
  o Theoretical Validity: Connecting findings to broader concepts.
  o Generalizability (Transferability): Can findings be applied to other contexts?

Lecture 2: Grounded Theory & Qualitative Interviewing

Key Themes:

Grounded Theory (Strauss & Corbin, 1998)
  o An inductive method that builds theory from data rather than testing existing hypotheses.
  o Uses systematic coding (open, axial, selective coding) to generate insights.

Data Collection in Qualitative Research
  o Types of Data:
    ▪ Interviews, observations, documents, archival records.
  o Types of Research Designs:
    ▪ Single Case Study: Deep understanding of one entity.
    ▪ Multiple Case Study: Comparison across cases.
    ▪ Process Study: Longitudinal analysis of changes over time.

Interviewing Techniques
  o Creating Open-Ended Questions: Avoiding leading, yes/no, or biased questions.
  o Allowing Participants to Lead: Letting them introduce topics they find important.
  o Adjusting Questions Based on Responses: Using follow-up questions to explore deeper insights.
  o Ethical Considerations in Interviewing:
    ▪ Obtaining informed consent.
    ▪ Respecting participant privacy and emotional well-being.

Reflexivity in Qualitative Research
  o Acknowledging researcher bias and influence on data collection.
  o Strategies to reduce bias:
    ▪ Memo-writing.
    ▪ Member-checking (validating interpretations with participants).

Lecture 3: Coding and the Gioia Method

Key Themes:

What is Coding?
  o The process of systematically categorizing data to identify patterns, themes, and relationships.
  o Coding helps reduce large amounts of qualitative data into manageable insights.

Types of Coding
  o Open Coding: Identifying initial themes and categories.
  o Axial Coding: Connecting different categories to understand relationships.
  o Selective Coding: Developing a central storyline or theoretical model.

The Gioia Method (Gioia et al., 2013)
  o A structured qualitative data analysis technique that balances participant perspectives (first-order concepts) with researcher interpretations (second-order themes).
  o Key Steps:
    1. Data Collection: Multiple rounds of interviews and document analysis.
    2. First-Order Coding: Identifying participants' raw statements.
    3. Second-Order Themes: Developing broader concepts based on patterns.
    4. Data Structure: Organizing themes into a visual representation.
    5. Theory Building: Explaining the relationships between concepts.

Case Selection in Qualitative Research
  o Critical Cases: Test or refine an existing theory.
  o Extreme Cases: Capture unusual or unexpected behaviors.
  o Revelatory Cases: Provide access to new insights on an underexplored topic.

Lecture 4: The Eisenhardt Method & Case Study Research

Key Themes:

The Eisenhardt Method (Eisenhardt, 1989)
  o A rigorous case study method that blends theoretical sampling, constant comparison, and cross-case analysis.
  o Focuses on theory building rather than simply describing phenomena.

Key Features of the Eisenhardt Method
  o Constant Comparison: Iterating between data and emerging theory.
  o Replication Logic: Treating each case as a separate experiment.
  o Cross-Case Analysis: Identifying patterns across different settings.

Case Selection Strategies
  o Matched Pairs Design: Selecting cases with similar characteristics but different outcomes.
  o Polar Types: Comparing extreme success and failure cases.
  o Common Process Design: Studying the same process across different settings.

Developing Theoretical Arguments
  o Moving beyond description to explain why patterns occur.
  o Identifying boundary conditions (when the theory applies and when it doesn't).

Lecture 5: Process Studies & Theory Development

Key Themes:

What Are Process Studies? (Langley, 1999)
  o Focus on how events unfold over time.
  o Used for studying organizational change, decision-making, and strategy formation.

Key Strategies in Process Research
  o Temporal Bracketing: Dividing data into meaningful time periods.
  o Visual Mapping: Creating timelines of events to identify patterns.
  o Turning Nouns into Verbs: Emphasizing processes (e.g., "strategizing" instead of "strategy").

Developing Theory from Data
  o Generating Concepts: Identifying surprising findings or gaps in the literature.
  o Refining Categories: Splitting or merging concepts for clarity.
  o Building Theoretical Models: Creating abstract representations of observed phenomena.
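Temporal bracketing, described above, is essentially grouping a case's event record into analytically meaningful phases. A minimal sketch of the idea in Python follows; the events, dates, and phase labels are invented for illustration, not taken from any study:

```python
from datetime import date

# Hypothetical event log from a longitudinal case study (illustrative data only).
events = [
    (date(2020, 1, 15), "new CEO appointed"),
    (date(2020, 6, 3), "strategy workshop held"),
    (date(2021, 2, 20), "restructuring announced"),
    (date(2021, 9, 1), "first product pivot"),
]

# Phases are analytical choices made by the researcher, not given by the data;
# here they are simply calendar windows for the sake of the example.
phases = [
    ("Phase 1: Leadership change", date(2020, 1, 1), date(2020, 12, 31)),
    ("Phase 2: Strategic reorientation", date(2021, 1, 1), date(2021, 12, 31)),
]

def bracket(events, phases):
    """Group events into the phase whose date window contains them."""
    grouped = {name: [] for name, _, _ in phases}
    for when, what in events:
        for name, start, end in phases:
            if start <= when <= end:
                grouped[name].append(what)
    return grouped

for phase, items in bracket(events, phases).items():
    print(phase, "->", items)
```

The output of such a grouping is a per-phase event list that can then be compared across phases (within-case) or across cases, which is what the visual mapping strategy above turns into timelines.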
Lecture 6: Writing the Method and Findings Sections

Key Themes:

How to Write the Methods Section
  o Single Case Study Design (Gioia & Langley):
    ▪ Explaining how data was collected, coded, and analyzed.
    ▪ Using tables, quotes, and thematic structures to present findings.
  o Multiple Case Study Design (Eisenhardt):
    ▪ Describing how cases were selected, analyzed, and compared.

How to Structure the Findings Section
  o Periodization: Organizing findings into meaningful phases.
  o Using Data Tables: Presenting first-order and second-order codes.
  o Linking Findings to Theory: Demonstrating how the data contributes to existing research.

Presenting Theoretical Contributions
  o Clearly stating the research puzzle.
  o Comparing findings with alternative theoretical perspectives.
  o Highlighting practical and managerial implications.

Writing Process Tips
  o Iterative Writing: Writing multiple drafts to refine arguments.
  o Engaging with Literature: Strengthening findings by linking them to past research.
  o Avoiding Common Pitfalls: Ensuring clarity, coherence, and logical flow in writing.

Workshops

Workshop 1: Ethnographic Observation (Spradley's Method)

Key Concepts & Definitions
  o Ethnographic Observation: A qualitative research method that involves systematically observing people in their natural settings to understand behaviors, interactions, and cultural norms.
  o James Spradley's Domains of Descriptive Observation:
    1. Space – The physical layout of the setting.
    2. Actors – The people involved in the setting.
    3. Activities – The specific actions people are engaged in.
    4. Objects – Physical items in the setting.
    5. Acts – Small individual behaviors.
    6. Events – Larger activities composed of multiple acts.
    7. Time – Sequencing of events.
    8. Goals – What participants aim to accomplish.
    9. Feelings – Emotional expressions observed.

Findings & Practical Insights
  o Observation should be descriptive rather than interpretative.
  o Pay attention to patterns of interaction and the social norms that shape behaviors.
  o Writing an observation report should include objective details and avoid assumptions.
  o Effective observations provide rich contextual data that help frame further research.

Workshop 2: Conversational Interviewing

Key Concepts & Definitions
  o Conversational Interviewing: A flexible, open-ended interviewing technique that allows participants to express themselves without rigid structure.
  o Interview Protocol: A guide used to structure an interview while maintaining flexibility.
  o Open-Ended Questions: Questions that allow for detailed responses, rather than simple yes/no answers.
  o Reflexivity: The process of being aware of one's own biases and preconceptions while conducting research.

Best Practices for Interviewing
  o Build rapport with the interviewee to encourage honest responses.
  o Start with broad, open-ended questions and let participants lead the conversation.
  o Avoid leading questions that may bias responses.
  o Use probing follow-up questions to gain deeper insights (e.g., "Can you elaborate on that?").
  o Record and transcribe interviews for accurate data collection.

Common Challenges in Interviewing
  o Social desirability bias: Participants may give answers they think the interviewer wants to hear.
  o Memory recall issues: Participants may struggle to accurately remember past events.
  o Interviewer bias: The interviewer's own perspective may influence the way questions are framed.

Workshop 3: Coding Qualitative Data

Key Concepts & Definitions
  o Coding: The process of categorizing qualitative data to identify themes and patterns.
  o Open Coding: The first step of analysis, where key phrases and ideas are labeled.
  o Axial Coding: Identifying relationships between different codes.
  o Selective Coding: Developing a central narrative or theory based on patterns.

How to Code Data
  1. Read the data carefully and highlight important phrases.
  2. Assign descriptive labels to words or sentences that capture key ideas.
  3. Look for patterns and recurring concepts.
  4. Refine categories by grouping similar codes together.

Key Questions to Ask During Coding
  o What is happening in this data?
  o What stands out as significant?
  o How do different codes relate to each other?
  o What concepts are emerging that might contribute to theory?

Common Mistakes in Coding
  o Over-coding: Assigning too many different codes, making analysis complicated.
  o Forcing categories: Trying to fit data into predefined concepts rather than letting themes emerge naturally.
  o Ignoring contradictions: Valuable insights often come from anomalies in the data.

Workshop 4: Coding with AI (Atlas.ti & ChatGPT)

Key Concepts & Definitions
  o AI-Assisted Coding: Using machine learning tools like Atlas.ti and ChatGPT to analyze qualitative data.
  o Manual vs. AI Coding: While AI can speed up the coding process, it lacks the contextual understanding that human researchers provide.
  o Data Triangulation: Comparing AI-generated codes with manual coding to ensure accuracy.

Using AI in Coding
  1. Start with manual coding to gain first-hand insights.
  2. Use AI tools (Atlas.ti, ChatGPT) only after completing initial manual coding.
  3. Compare AI-generated codes with manual ones.
  4. Validate AI-generated codes by checking for relevance and consistency.

Limitations of AI in Qualitative Research
  o AI lacks deep contextual understanding and may misinterpret complex themes.
  o AI over-generates single codes, leading to redundant categories.
  o Ethical concerns about data privacy and bias in AI algorithms.

Best Practices
  o Use AI as a support tool rather than relying on it entirely.
  o Disclose AI usage in research write-ups.
  o Manually review AI-generated outputs to refine categories.

Workshop 5: From Data to Theory

Key Concepts & Definitions
  o Theoretical Saturation: The point at which new data no longer generates new insights.
  o Conceptual Model: A visual representation of key themes and their relationships.
  o Theoretical Sampling: Selecting data sources based on their potential to refine theory.

Steps to Move from Data to Theory
  1. Generate Initial Codes: Identify interesting patterns in the data.
  2. Refine and Organize Codes: Merge similar categories, drop irrelevant ones.
  3. Stabilize Codes: Compare codes with existing theories.
  4. Develop a Conceptual Model:
    o Identify second-order categories (higher-level themes).
    o Create visual diagrams to represent relationships.
    o Label connections between concepts to explain causal mechanisms.

Common Mistakes in Theory Development
  o Jumping to conclusions too early without fully exploring patterns.
  o Forcing data to fit existing theories rather than letting new insights emerge.
  o Ignoring contradictions that might refine or challenge theories.

Mintzberg's Advice on Theory Development
  o Good theory comes from suspending rigid scientific correctness and allowing for creative exploration.

Workshop 6: Writing the Methods Section

Key Components of a Methods Section
  1. Research Setting:
    o Explain where and how the study was conducted.
    o Justify why this setting is relevant.
  2. Data Collection:
    o Describe how data was gathered (e.g., interviews, observations).
    o Explain the sampling strategy (Who was interviewed? Why?).
  3. Data Analysis:
    o Detail coding procedures:
      ▪ First-order coding (open coding).
      ▪ Second-order themes (focused coding).
    o Discuss AI tools used (if applicable).
  4. Linking to Theory:
    o Show how the emerging categories relate to existing research.
    o Justify why certain relationships between concepts were identified.
  5. Findings Section:
    o Organize findings by second-order themes.
    o Support each theme with direct quotations from data.
    o Provide a conceptual model to illustrate relationships.

Common Mistakes in Writing Methods
  o Being vague about data collection and analysis.
  o Not clearly defining codes and categories.
  o Failing to justify theoretical connections.

Final Thoughts on Writing
  o Methods should be transparent and replicable.
  o Avoid overloading with technical jargon.
  o Provide real examples to support descriptions.

Final Key Takeaways for the Exam
  1. Understand key qualitative research methods (observation, interviewing, coding).
  2. Be able to explain different coding strategies (open, axial, selective).
  3. Know the strengths and limitations of AI in qualitative research.
  4. Be able to describe the process of moving from data to theory.
  5. Understand how to structure and write a qualitative methods section.
  6. Practice identifying theoretical contributions from qualitative findings.

Mandatory Course Readings

On the methodology:

Shah, S. K., & Corley, K. G. (2006). "Building Better Theory by Bridging the Quantitative–Qualitative Divide." Journal of Management Studies, 43(8), 1821-1835.

Key Argument of the Paper
The article argues that qualitative research plays a crucial role in theory development, while quantitative research primarily focuses on theory testing. To build better theories, researchers should combine both approaches. The authors emphasize the need for methodological pluralism in management research and propose using grounded theory as a key tool for qualitative analysis.

1. The Role of Theory in Research

Theory vs. Data: Data alone does not generate theory—researchers do (Mintzberg, 1979).

Theory-building vs. Theory-testing:
  o Quantitative research is well-suited for testing hypotheses based on existing theory.
  o Qualitative research helps in developing new theoretical insights by exploring rich, contextualized data.

2. The Divide Between Quantitative and Qualitative Research

Functionalist Paradigm (Quantitative Approach):
  o Assumes objective reality that can be measured.
  o Emphasizes replication, generalizability, and causality.
  o Uses surveys, experiments, and statistical models.

Interpretivist Paradigm (Qualitative Approach):
  o Assumes multiple social realities shaped by individuals.
  o Emphasizes understanding lived experiences.
  o Uses interviews, observations, and case studies.

The Need for Integration:
  o Relying solely on quantitative data limits deep understanding.
  o Relying only on qualitative insights risks limited generalizability.

3. Grounded Theory as a Bridge Between Quantitative and Qualitative Research

What is Grounded Theory?
  o Developed by Glaser & Strauss (1967): an inductive research method that builds theories directly from data.
  o Researchers collect rich, in-depth data and refine theories through constant comparison.

Steps in Grounded Theory Analysis:
  1. Theoretical Sampling: Selecting cases that provide contrasting insights.
  2. Open Coding: Identifying key themes in the data.
  3. Axial Coding: Linking categories to identify relationships.
  4. Selective Coding: Developing a central theory from findings.
  5. Theoretical Saturation: Stopping data collection once no new insights emerge.

Benefits of Grounded Theory:
  o Allows for data-driven discovery of concepts.
  o Captures social processes and interactions.
  o Provides flexibility in adapting research focus as new themes emerge.

4. Ensuring Rigor in Qualitative Research

To counter the bias against qualitative research, the authors propose methods to improve trustworthiness:

Credibility (Internal Validity):
  o Prolonged engagement in the field.
  o Triangulation: Using multiple data sources.
  o Peer debriefing to check biases.

Transferability (External Validity):
  o Thick description: Providing detailed context for findings.
  o Linking insights to existing theory.

Dependability (Reliability):
  o Clear sampling strategy.
  o Transparent coding procedures.

Confirmability (Objectivity):
  o Separating first-order (participant) and second-order (researcher) interpretations.
  o Keeping an audit trail of coding decisions.

5. The Value of Combining Qualitative and Quantitative Research

Triangulation (Jick, 1979):
  o Using multiple methods to verify insights.
  o Reduces biases inherent in a single method.
Examples of Combining Methods:
  o Ziedonis (2004): Used qualitative interviews to identify reasons for patent races, then conducted quantitative hypothesis testing.
  o Gioia & Thomas (1996): Developed a theoretical model from interviews, then validated it through a large-scale survey.
  o Sutton & Rafaeli (1988): Found surprising quantitative results and used qualitative interviews to explain the pattern.

Trade-offs in Theory Building (Weick, 1979):
  o Research must balance simplicity (ease of understanding), accuracy (faithfulness to reality), and generalizability (applicability to other contexts).
  o Qualitative research is accurate but complex.
  o Quantitative research is simple and generalizable but less detailed.
  o A combination of both leads to stronger theories.

6. Conclusion: A Call for Methodological Pluralism
  o Qualitative and quantitative research should not be seen as opposites but as complementary approaches.
  o Theory-building requires rich qualitative insights before large-scale testing can validate findings.
  o Researchers should be trained in both methods to maximize research impact.

Key Takeaways for Exam Preparation
  1. What is the main argument of the paper?
    o The best theories are built by integrating qualitative (theory-building) and quantitative (theory-testing) methods.
  2. How does grounded theory help in qualitative research?
    o Grounded theory is an inductive method that develops concepts directly from data through theoretical sampling, coding, and constant comparison.
  3. What are the main challenges in qualitative research?
    o Lack of replicability, bias concerns, and difficulty in generalizing findings.
  4. How can qualitative research be made rigorous?
    o Triangulation, thick description, transparent coding, and audit trails.
  5. Why is it useful to combine qualitative and quantitative research?
    o Qualitative research explains "why" something happens, while quantitative research tests how often or to what extent it happens.
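One small, concrete way to demonstrate the transparent coding procedures mentioned above (the dependability criterion) is to quantify how often two coders assign the same label to the same excerpts. The sketch below uses invented excerpt IDs and labels and a simple percent-agreement measure; real studies often prefer chance-corrected statistics such as Cohen's kappa:

```python
# Illustrative sketch: percent agreement between two coders on shared excerpts.
# Excerpt IDs and code labels are hypothetical, not drawn from any actual study.
coder_a = {"ex1": "identity work", "ex2": "sensemaking", "ex3": "identity work", "ex4": "legitimacy"}
coder_b = {"ex1": "identity work", "ex2": "sensegiving", "ex3": "identity work", "ex4": "legitimacy"}

def percent_agreement(a: dict, b: dict) -> float:
    """Share of jointly coded excerpts on which both coders assigned the same label."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(a[k] == b[k] for k in shared) / len(shared)

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")
```

The disagreements such a check surfaces (here, "ex2") are precisely the excerpts worth raising in peer debriefing, rather than averaging away: discussing them is part of the audit trail of coding decisions.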
Final Exam Tip
Expect questions on:
  o Definitions of functionalist vs. interpretivist paradigms.
  o Steps in grounded theory and coding methods.
  o Ways to improve rigor in qualitative research.
  o Examples of studies that successfully combine methods.

On some of the core ingredients:

Suddaby, R. (2006). "What Grounded Theory Is Not." Academy of Management Journal, 49(4), 633-642.

Key Purpose of the Article
Suddaby critiques the misuse and misinterpretation of grounded theory in management research. He highlights common misconceptions and clarifies what grounded theory truly entails, emphasizing its inductive nature, iterative data analysis, and theoretical development.

1. Definition & Origins of Grounded Theory
  o Developed by Glaser & Strauss (1967) as a reaction to positivism in social science.
  o A method for theory generation rather than testing existing theories.
  o Based on symbolic interactionism and pragmatism, meaning:
    ▪ Reality is socially constructed.
    ▪ Knowledge emerges from observing and interpreting social interactions.

Key Characteristics of Grounded Theory
  1. Constant Comparison – Data collection and analysis occur simultaneously.
  2. Theoretical Sampling – New data is collected based on emerging insights.
  3. Conceptualization over Description – Focuses on abstracting data into theories, rather than just summarizing facts.

2. Common Misconceptions About Grounded Theory

Suddaby identifies six common mistakes researchers make when claiming to use grounded theory:

1) Grounded Theory Is Not an Excuse to Ignore the Literature
  o Many assume grounded theory requires entering the field with no prior knowledge.
  o Reality:
    ▪ Researchers must engage with existing literature to frame meaningful research questions.
    ▪ However, they should avoid pre-determined hypotheses to allow theory to emerge from data.

2) Grounded Theory Is Not Just Presenting Raw Data
  o Some researchers present long verbatim interview transcripts without theorizing.
  o Reality:
    ▪ Grounded theory requires abstraction, where raw data is coded and categorized into higher-order concepts.
    ▪ The goal is to develop theory, not just document participant stories.

3) Grounded Theory Is Not Hypothesis Testing
  o Some papers use interviews to "test" pre-existing hypotheses.
  o Reality:
    ▪ Grounded theory is exploratory and inductive, meaning researchers discover rather than test theories.
    ▪ It should be used when theories do not yet exist or are underdeveloped.

4) Grounded Theory Is Not Just Content Analysis or Word Counting
  o Some researchers rely on word frequency analysis to claim grounded theory.
  o Reality:
    ▪ Grounded theory looks at meaning and social interactions, not just word patterns.
    ▪ Content analysis is quantitative, whereas grounded theory is qualitative and interpretive.

5) Grounded Theory Is Not a Mechanical Formula
  o Many believe following a set number of interviews (e.g., 25-30) guarantees a valid grounded theory.
  o Reality:
    ▪ The number of interviews depends on theoretical saturation, meaning data collection stops when no new insights emerge.
    ▪ Grounded theory requires researcher creativity and intuition in coding and categorization.

6) Grounded Theory Is Not Easy
  o Some believe grounded theory is simple because it lacks strict statistical analysis.
  o Reality:
    ▪ It requires deep analytical skills to identify patterns, relationships, and theoretical contributions.
    ▪ The researcher is an active interpreter, meaning bias and reflexivity must be carefully managed.

3. Practical Recommendations for Conducting Grounded Theory Research

Suddaby advises researchers to:
  1. Engage with prior literature but remain open to unexpected findings.
  2. Go beyond describing data by conceptualizing key themes.
  3. Use grounded theory for discovery, not hypothesis testing.
  4. Develop higher-order theories, rather than focusing on word counts.
  5. Be flexible and let data guide analysis, rather than following rigid rules.
  6. Acknowledge the complexity of grounded theory and be transparent in methodological choices.

4. Conclusion: Why Grounded Theory Matters
  o Grounded theory helps researchers develop new insights in areas where little theory exists.
  o It is not a quick fix or formula, but a rigorous method requiring deep engagement with data.
  o When used correctly, it produces powerful theories that explain how individuals construct meaning in social contexts.

Key Takeaways for Exam Preparation
  1. What is grounded theory?
    o An inductive research method that builds theory from systematic qualitative data analysis.
  2. What are the two main processes in grounded theory?
    o Constant Comparison (simultaneous data collection & analysis).
    o Theoretical Sampling (deciding what data to collect next based on emerging insights).
  3. What are the six misconceptions of grounded theory?
    o Ignoring literature.
    o Presenting raw data without analysis.
    o Using it for hypothesis testing.
    o Confusing it with content analysis.
    o Applying it as a rigid formula.
    o Assuming it is easy.
  4. How do you ensure rigor in grounded theory research?
    o Engage with literature but stay open to emerging data.
    o Conceptualize beyond raw data.
    o Ensure theoretical saturation before stopping data collection.
    o Be transparent in explaining methods.

Maxwell, J. A. (1992). "Understanding and Validity in Qualitative Research." Harvard Educational Review, 62(3), 279-300.

1. Purpose of the Article

Maxwell critiques traditional notions of validity in research, which are primarily based on quantitative epistemologies. He argues that qualitative research requires different criteria to ensure validity because it focuses on understanding meaning and context rather than statistical generalizability.

Key Argument:
  o Traditional validity concepts (from positivism) are not suitable for qualitative research.
  o Qualitative research emphasizes understanding over measurement.
  o Validity in qualitative research should focus on credibility, interpretative accuracy, and theoretical coherence.

2. The Problem with Traditional Validity Criteria

Traditional research validity is often assessed using internal validity, external validity, reliability, and objectivity. Maxwell argues that these are insufficient for qualitative research because:
  o Internal validity (causal inference) does not apply when studying lived experiences.
  o External validity (generalizability) is not a primary concern in qualitative studies.
  o Reliability (consistency of results) overlooks the evolving nature of qualitative data.
  o Objectivity (detachment of the researcher) ignores the role of researcher reflexivity in interpreting meaning.

Instead, Maxwell proposes five alternative validity types tailored for qualitative research.

3. Five Types of Validity in Qualitative Research

1) Descriptive Validity
  o Definition: The factual accuracy of qualitative data.
  o Example: If a researcher claims a participant said something, did they accurately report the words?
  o How to Ensure It:
    ▪ Use recordings, detailed field notes, and verbatim transcripts.
    ▪ Cross-check descriptions with multiple observers.

2) Interpretive Validity
  o Definition: The accuracy of the researcher's interpretation of participants' meanings.
  o Example: If a participant discusses their workplace stress, does the researcher understand their experience correctly?
  o How to Ensure It:
    ▪ Member checking: Ask participants to confirm interpretations.
    ▪ Thick description: Provide rich, detailed context to support interpretations.

3) Theoretical Validity
  o Definition: The extent to which concepts accurately represent the phenomena studied.
  o Example: If a study claims to identify "resilience strategies" in entrepreneurs, are these really resilience strategies or just coping mechanisms?
  o How to Ensure It:
    ▪ Use multiple theoretical frameworks to interpret findings.
    ▪ Compare emergent concepts with existing literature.
4) Generalizability (Transferability)
  o Definition: The applicability of findings beyond the studied setting.
  o Example: Can a study on startup resilience in New York apply to entrepreneurs in Europe?
  o How to Ensure It:
    ▪ Provide detailed contextual information so readers can judge if findings apply elsewhere.
    ▪ Use purposeful sampling to increase transferability.

5) Evaluative Validity
  o Definition: The appropriateness of value judgments in research.
  o Example: If a researcher labels a leadership style as "toxic," is this based on empirical data or personal bias?
  o How to Ensure It:
    ▪ Clearly distinguish data-driven interpretations from subjective opinions.
    ▪ Use triangulation (multiple perspectives) to validate ethical judgments.

4. The Role of Reflexivity in Validity

Maxwell highlights researcher reflexivity as crucial for qualitative validity.
  o What is Reflexivity? The researcher's awareness of their influence on data collection and analysis.
  o Why is it Important? It ensures bias does not distort findings.
  o How to Apply It:
    ▪ Keep analytical memos.
    ▪ Discuss alternative explanations of data.
    ▪ Acknowledge personal assumptions in research write-ups.

5. Implications for Qualitative Research

What Should Qualitative Researchers Do?
  o Move beyond positivist criteria: Validity is not about objectivity but about ensuring accurate understanding.
  o Be transparent in methodology: Describe how interpretations were formed.
  o Use multiple methods: Triangulation improves validity.
  o Engage with participants: Member-checking helps confirm accurate interpretations.

6. Conclusion
  o Validity in qualitative research should not be about statistical measures but about accurately capturing human experience.
  o Five types of validity (descriptive, interpretive, theoretical, generalizability, and evaluative) provide better criteria for assessing qualitative research.
  o Researcher reflexivity is key to maintaining credibility in qualitative research.
  o Transparency and engagement with data are critical for producing trustworthy findings.

Key Takeaways for Exam Preparation
  1. What is the main argument of the paper?
    o Traditional validity measures do not apply to qualitative research. Instead, researchers should focus on interpretation, credibility, and transferability.
  2. What are the five types of validity in qualitative research?
    o Descriptive Validity (accuracy of facts).
    o Interpretive Validity (accuracy of meaning).
    o Theoretical Validity (conceptual accuracy).
    o Generalizability (transferability to other settings).
    o Evaluative Validity (avoiding researcher bias in value judgments).
  3. What is reflexivity, and why is it important?
    o Reflexivity is the researcher's awareness of their influence on the research.
    o It ensures bias does not distort data collection and interpretation.
  4. How can qualitative research improve validity?
    o Member checking (asking participants to confirm interpretations).
    o Triangulation (using multiple sources of data).
    o Providing thick descriptions (rich, detailed data).
    o Clearly explaining research assumptions.

Final Exam Tip
Expect questions on:
  o Why traditional validity criteria do not apply to qualitative research.
  o Definitions and examples of the five types of qualitative validity.
  o How researcher reflexivity influences qualitative analysis.
  o Strategies for ensuring rigor in qualitative research.

Van Maanen, J. (1979). "The Fact of Fiction in Organizational Ethnography." Administrative Science Quarterly, 24(4), 539-550.

1. Purpose of the Article

Van Maanen critiques the assumption that ethnographic research purely represents "facts." He argues that ethnography involves interpretation, subjectivity, and storytelling—making it a form of "fiction" in some respects. He examines how ethnographers construct narratives about organizations and the methodological challenges they face in distinguishing fact from fiction.

2. The Nature of Ethnography
  o Ethnography is not just observation; it is an immersive process where the researcher becomes involved in the setting.
  o The goal of ethnography is to describe organizational culture through firsthand experience.
  o Researchers must interpret behaviors, interactions, and narratives—but these interpretations are not purely objective facts.
  o The challenge: Ethnographers often form theories too early and then shape data to fit those theories.

3. First-Order vs. Second-Order Concepts

Van Maanen distinguishes between two types of ethnographic understanding:
  o First-Order Concepts:
    ▪ The "facts" as reported by participants.
    ▪ Examples: A police officer describing their daily routine, a company worker explaining a procedure.
  o Second-Order Concepts:
    ▪ The researcher's interpretation of first-order concepts.
    ▪ Example: A researcher explaining that police "call-jumping" (one officer taking another's call) is a sign of internal competition.
  o Key Issue: Second-order concepts are shaped by the researcher's perspective, meaning ethnography is always interpretative rather than purely factual.

4. Presentational vs. Operational Data

Van Maanen further differentiates between two types of ethnographic data:
  o Presentational Data (what participants say):
    ▪ How people want to be perceived (e.g., "We don't use racial profiling").
    ▪ These are often idealized descriptions rather than reality.
  o Operational Data (what people do):
    ▪ Actual observed behaviors (e.g., officers using subtle forms of racial profiling).
    ▪ Researchers must contrast what people say vs. what they actually do.
  o Key Challenge: Ethnographers must recognize that people often misrepresent reality, either intentionally (to look good) or unconsciously (because they don't realize their biases).

5. The Fiction of Ethnography

Van Maanen argues that all ethnographic accounts contain an element of fiction because:
  o Interpretation is always involved—researchers filter observations through their own worldview.
Observations are selective—researchers choose which data to include. Language shapes meaning—writing styles and narrative structures affect how findings are perceived. Example: A researcher might emphasize dramatic incidents (like police misconduct) while downplaying routine, uneventful interactions, creating a biased image of the organization. 6. The Risk of Misinterpretation Van Maanen identifies three key risks in ethnography: 1. Participants Lying or Hiding Information o People may misrepresent their actions to protect themselves. o Example: Police officers deny engaging in "street justice" (extra-legal punishment), even if they do. 2. Participants Being Unaware of Their Own Behavior o Many organizational norms are taken for granted and difficult to articulate. o Example: Employees follow unspoken workplace rules but can’t explain why they act that way. 3. Researcher Bias and Selective Perception o Ethnographers bring their own assumptions, which shape how they interpret data. o Example: A researcher studying police culture might overemphasize corruption if they expect to find it. 7. Ethnography as an "Illusion" Ethnographers hope to uncover objective truths, but their findings are always shaped by interpretation. This does not mean ethnography is invalid, but it requires careful self- awareness and reflexivity. Researchers must constantly question their assumptions and acknowledge subjectivity. Key Takeaway: The best ethnographic research is transparent about its limitations and interpretative nature rather than claiming pure objectivity. 8. Conclusion: Ethnography as a Balance Between Fact and Fiction Ethnographers construct reality rather than simply reporting it. Distinguishing between what is said (presentational data) and what is done (operational data) is crucial. Good ethnographic research requires constant self-reflection, skepticism, and critical analysis. Key Takeaways for Exam Preparation 1. What is the main argument of the paper? 
o Ethnography is not purely objective; it contains elements of fiction because it involves interpretation, storytelling, and selective representation of facts. 2. What are first-order and second-order concepts? o First-order concepts: The "facts" as reported by participants. o Second-order concepts: The researcher’s interpretation of those facts. 3. What is the difference between presentational and operational data? o Presentational data: What people say they do. o Operational data: What people actually do. 4. Why is ethnography a form of "fiction"? o Because it involves selective observation, interpretation, and narrative construction. 5. What are the main risks in ethnographic research? o Lies and deception from participants. o Unconscious biases from both participants and researchers. o Researcher subjectivity in framing and interpreting data. Final Exam Tip Expect questions on: How ethnographers interpret social behavior. Why ethnography is always partly subjective. Examples of presentational vs. operational data. How researchers can reduce bias in ethnography. - On the main types of methods: Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2012). "Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology." Organizational Research Methods, 16(1), 15-31. 1. Purpose of the Article The article introduces the Gioia Methodology, a systematic approach to qualitative inductive research aimed at improving rigor in theory development. The authors argue that while qualitative research is rich in discovery, it often lacks methodological clarity. The Gioia Method provides a structured way to generate new concepts and build grounded theories, balancing creativity and systematic rigor. Key Research Questions: 1. How can qualitative research achieve scholarly rigor while preserving the creativity needed for theoretical breakthroughs? 2. How can researchers demonstrate systematic links between data and emerging theories? 2. 
The Need for Rigor in Inductive Research
Traditional research focuses on construct measurement—quantifying predefined variables.
Inductive research (Grounded Theory) is about concept development—creating new theoretical insights from data.
Problem: Many qualitative studies lack transparency in how theories emerge from data, leading to skepticism in academia.
Solution: The Gioia Methodology introduces a structured yet flexible approach to building theory from qualitative data.

3. The Core Principles of the Gioia Method
The methodology is based on two key assumptions:
1. The Organizational World is Socially Constructed
o Reality is shaped by people's interactions, beliefs, and interpretations.
o Research should focus on how individuals make sense of their experiences.
2. Informants are Knowledgeable Agents
o Participants understand their own experiences and can articulate meaningful insights.
o Researchers should listen first, interpret later.
Implication: Researchers should avoid imposing existing theories on data and instead let participants' voices drive conceptual development.

4. The Gioia Methodology: A Systematic Inductive Approach
The Gioia Method involves three structured steps:
Step 1: First-Order Analysis (Informant-Centric)
Researchers code interviews and observations using participant language.
No attempt to categorize or impose theory yet—this step preserves authenticity.
Often results in many categories (50-100 concepts initially).
Key Idea: Let participants define what is important rather than applying pre-existing frameworks.
Step 2: Second-Order Analysis (Researcher-Centric)
The researcher identifies patterns, themes, and relationships across first-order categories.
Categories are merged, refined, and abstracted into theoretical constructs.
This is where theoretical insights begin to emerge.
Key Question: What underlying patterns explain these diverse participant responses?
Step 3: Data Structure and Theory Building
The data structure visually organizes first-order concepts into second-order themes.
Helps demonstrate the rigor of the analysis and how raw data lead to theory.
The final step involves building a dynamic grounded theory model.
Outcome: A theory that connects empirical data to conceptual insights in a structured, replicable manner.

5. Example: How the Method Works in Practice
The authors illustrate their method with a study on strategic change in an academic institution:
1. First-Order Categories: Participants used terms like "strategic," "political," and "environmental" to describe key issues.
2. Second-Order Themes: These were grouped into broader themes like "Sensemaking in Uncertainty" and "Identity Work."
3. Grounded Theory Model: A conceptual framework was developed explaining how managers navigated organizational change.
Key Lesson: This step-by-step process ensures rigor while allowing for the discovery of new concepts.

6. Advantages of the Gioia Methodology
1. Enhances Transparency
o Shows a clear, step-by-step path from raw data to theory.
o Increases credibility and replicability.
2. Balances Inductive Creativity with Rigor
o Encourages theoretical breakthroughs while maintaining analytical discipline.
3. Makes Qualitative Research More Defensible
o Addresses criticisms of subjectivity in qualitative studies.
Key Impact: The method has been widely adopted in management, strategy, and organizational studies.

7. Conclusion: A New Standard for Inductive Research
The Gioia Methodology provides a structured way to develop theories from qualitative data.
Researchers should preserve participant voice, look for patterns, and build data-driven conceptual frameworks.
The method ensures qualitative research is both insightful and rigorous.
Final Takeaway: The best qualitative research captures complexity, remains systematic, and generates new theoretical insights.

Key Takeaways for Exam Preparation
1. What is the Gioia Methodology?
o A structured approach to qualitative inductive research that emphasizes concept discovery and theory development.
2. What are the three main steps in the method?
o First-Order Analysis (Participant-Centric) → Second-Order Themes (Researcher-Centric) → Data Structure & Theory Building.
3. Why is the Gioia Methodology important?
o It provides rigor, credibility, and transparency to qualitative research.
4. How does it differ from traditional qualitative methods?
o It explicitly links data to emerging theory through structured analysis.
5. What are the benefits of using this approach?
o Systematic yet flexible, ensures rigor, facilitates concept discovery, and helps researchers defend their findings.

Final Exam Tip
Expect questions on:
Steps of the Gioia Methodology.
Why qualitative research lacks credibility and how the Gioia Method addresses this.
How first-order and second-order coding works.
The importance of building a data structure in qualitative research.

Eisenhardt, K. M., & Graebner, M. E. (2007). "Theory Building from Cases: Opportunities and Challenges." Academy of Management Journal, 50(1), 25-32.

1. Purpose of the Article
The article discusses theory building from case studies, emphasizing its importance in developing new theoretical constructs, relationships, and midrange theories.
The authors explain how case study research can bridge rich qualitative insights with rigorous theory development, addressing common misconceptions and challenges in publishing case-based research.

2. The Value of Theory Building from Cases
Case study research is a powerful tool for inductive theory development, often leading to groundbreaking contributions.
It balances rich empirical data with structured theoretical development.
Many highly cited management studies (e.g., Mintzberg, Eisenhardt) have used case-based theory-building approaches.
Multiple-case studies provide stronger theoretical insights than single-case studies.
Key Idea: Case-based research is particularly useful when existing theories do not adequately explain a phenomenon, or when studying complex social processes that are difficult to quantify.

3. Key Characteristics of Case-Based Theory Building
Inductive and Iterative: Theories emerge from systematic comparison of cases rather than pre-existing hypotheses.
Rich Empirical Data: Uses multiple data sources (interviews, observations, archives).
Replication Logic: Each case serves as an experiment, refining the emerging theory.
Contrast with Deductive Research:
Deductive research tests existing theories using hypotheses and statistical methods.
Inductive research creates new theories by analyzing patterns in case data.

4. Case Selection: Theoretical Sampling vs. Random Sampling
Common Misconception: Cases should be randomly selected for generalizability.
Reality: Theoretical sampling is used to maximize theoretical insights, not for statistical representativeness.
Theoretical Sampling Strategies:
1. Extreme Cases: Studying highly successful or failed firms to identify strong patterns.
2. Contrasting Cases: Comparing cases with opposite outcomes to refine theoretical relationships.
3. Replication Logic: Selecting cases that confirm or challenge emerging theories.
Example: A study on acquisitions may include:
o Cases where firms successfully merged.
o Cases where acquisitions failed.
o Cases where firms chose not to merge, to explore alternative explanations.

5. The Role of Multiple Cases in Theory Development
Single Case Studies:
o Provide deep insights but limited generalizability.
o Used for exploring unique phenomena (e.g., Weick's Mann Gulch disaster study).
Multiple Case Studies:
o Provide stronger, more generalizable theories.
o Allow for replication logic, making findings more robust.
o Enable comparison across cases to identify patterns.
Key Takeaway:
Multiple cases = Stronger theory
Single case = Deep insight but limited generalization

6. Dealing with Interview Data in Case Studies
Problem: Critics argue that interviews introduce bias due to retrospective sensemaking and social desirability effects.
Solution:
1. Use multiple informants to cross-check perspectives.
2. Combine interviews with archival data, observations, and real-time case tracking.
3. Balance retrospective cases (historical analysis) with real-time cases (ongoing observation).
Example: A study on acquisitions used interviews with:
o CEOs of acquired firms.
o CEOs of acquiring firms.
o Board members.
o Investment bankers (to provide external perspectives).

7. Presenting Empirical Evidence
Challenge: Case-based research lacks the compact numerical tables of statistical studies.
Solution:
o Use "construct tables" that summarize key case insights.
o Clearly link theoretical constructs to empirical evidence.
o Use direct quotes and detailed descriptions while keeping the focus on theory.
Example: A case study on innovation processes might:
o Provide short narratives of how each firm experimented with new products.
o Summarize key themes in a table of emerging constructs.

8. Writing and Publishing Case-Based Research
Common Challenge: Reviewers often misunderstand case research, expecting statistical generalization or thick descriptive narratives.
Solution:
1. Clearly justify why an inductive approach is necessary.
2. Define theoretical sampling and case selection criteria.
3. Provide detailed but structured evidence linking data to emerging theory.
4. Explain how the study builds new theory rather than just describing cases.
Key Advice:
Clarify research goals early: Are you developing a new theory or extending an existing one?
Use visuals (diagrams, tables) to structure findings.
Be transparent about data collection and analysis methods.

9. Conclusion: Why Case-Based Research Matters
Case studies are essential for developing new theories in complex, real-world contexts.
Multiple-case studies provide robust theoretical insights by allowing systematic comparison.
Proper case selection and data triangulation enhance research rigor.
Publishing case-based research requires clear justification and structured presentation.
Final Takeaway: Case-based theory building bridges qualitative richness with theoretical rigor, making it one of the most valuable methods in management research.

Key Takeaways for Exam Preparation
1. What is theory-building from cases?
o A research strategy that develops new theories from empirical case data rather than testing existing hypotheses.
2. How does theoretical sampling differ from random sampling?
o Theoretical sampling selects cases based on their ability to generate insights, not for statistical representativeness.
3. Why are multiple-case studies better for theory building?
o They allow replication logic, increasing the validity and generalizability of emerging theories.
4. How can interview bias be minimized?
o Use multiple informants, archival data, and real-time observation to cross-check findings.
5. How should case-based research be presented?
o Use construct tables, direct quotes, and systematic analysis to connect empirical data with theoretical development.

Final Exam Tip
Expect questions on:
How theory-building from cases differs from hypothesis testing.
Theoretical sampling vs. random sampling.
Single vs. multiple case studies.
Strategies to improve case study rigor (e.g., triangulation, construct tables).
Common challenges in publishing case-based research.

Langley, A. (1999). "Strategies for Theorizing from Process Data." Academy of Management Review, 24(4), 691-710.

1. Purpose of the Article
The article explores different strategies for theorizing from process data, emphasizing the challenges of extracting theory from qualitative, time-sensitive, and event-based research.
Langley discusses seven key strategies for analyzing process data and highlights how theory and method are interwoven.
The goal is to develop accurate, parsimonious, generalizable, and useful theories from complex, dynamic data.

2. Understanding Process Data and Process Theory
Process data deal with sequences of events over time rather than static variables.
Process theories explain how and why things happen rather than just measuring what happens.
Unlike variance theories (which rely on dependent/independent variable relationships), process theories focus on the temporal sequence of events leading to an outcome.
Key Challenge: Process data are often messy, multi-level, temporally embedded, and eclectic, making it difficult to extract theoretical insights. Researchers must decide how to organize and analyze these complex sequences effectively.

3. The Seven Strategies for Theorizing from Process Data
Langley identifies seven distinct approaches to process data analysis, each with strengths and weaknesses.

1) Narrative Strategy
Description:
o Constructs detailed stories from process data, maintaining rich context and complexity.
o Often used in ethnography and case studies (e.g., Pettigrew, 1985).
Strengths:
o High accuracy in capturing real-world complexity.
o Useful for understanding context and meaning.
Weaknesses:
o Low generalizability—findings may not apply broadly.
o Limited theory-building capacity—stories remain descriptive unless combined with other strategies.

2) Quantification Strategy
Description:
o Converts qualitative process data into numerical codes and statistical models.
o Uses event-history analysis, sequence analysis, or log-linear models.
Example:
o Van de Ven & Poole (1990) coded innovation processes using binary coding (0/1) for different activities.
Strengths:
o Increases rigor and replicability.
o Can identify statistical patterns across cases.
Weaknesses:
o Loss of depth and complexity—simplifies rich qualitative data.
o Hard to apply when processes involve emotions, interpretations, and nonlinear relationships.

3) Alternate Templates Strategy
Description:
o Compares process data using multiple theoretical lenses.
o Example: Allison's (1971) Cuban Missile Crisis study compared rational actor, organizational process, and political power models.
Strengths:
o Allows researchers to test different explanations for the same events.
o Enhances theoretical clarity by revealing which model best explains the data.
Weaknesses:
o Can create ambiguity—which framework is "correct"?
o May overlook emergent theories by focusing on pre-existing models.

4) Grounded Theory Strategy
Description:
o Uses inductive, iterative coding to build theory from raw data (Glaser & Strauss, 1967).
o Identifies emergent categories and theoretical constructs.
Example:
o Sutton (1987) used grounded theory to study organizational decline.
Strengths:
o Deeply rooted in data—high credibility and accuracy.
o Allows new theories to emerge.
Weaknesses:
o Can be time-consuming and difficult to generalize.
o Often lacks structured theoretical framing.

5) Visual Mapping Strategy
Description:
o Represents process sequences visually using timelines, flowcharts, and diagrams.
o Helps in organizing complex temporal relationships.
Example:
o Langley & Truax (1994) used process maps for technology adoption.
Strengths:
o Simplifies complex processes for clearer understanding.
o Useful for identifying patterns and interactions.
Weaknesses:
o May oversimplify by focusing on surface structures rather than underlying mechanisms.

6) Temporal Bracketing Strategy
Description:
o Divides process data into meaningful time periods ("brackets") to analyze sequences separately.
o Identifies causal relationships between events in different time periods.
Example:
o Barley (1986) studied how technology adoption influenced professional structures over multiple time periods.
Strengths:
o Helps isolate key mechanisms driving change.
o Useful for identifying feedback loops and cyclical patterns.
Weaknesses:
o Difficult to determine "natural" brackets in some datasets.
o May miss interactions across time periods.

7) Synthetic Strategy
Description:
o Converts qualitative process narratives into comparable, measurable constructs.
o Used to compare multiple cases and develop broader theories.
Example:
o Eisenhardt (1989) coded decision-making processes across 8 cases to build a theory on speed in high-velocity environments.
Strengths:
o Balances rigor and flexibility.
o Allows for comparisons across cases.
Weaknesses:
o Loses temporal richness by summarizing processes into fixed constructs.
o Risks oversimplifying dynamic interactions.

4. Comparing the Strategies: Trade-Offs
Langley evaluates the strategies based on three theoretical criteria:
1. Accuracy (Closeness to Data)
o High: Narrative, Grounded Theory, Temporal Bracketing
o Moderate: Visual Mapping, Alternate Templates
o Low: Quantification, Synthetic
2. Simplicity (Clarity and Parsimony)
o High: Quantification, Synthetic
o Moderate: Visual Mapping, Alternate Templates
o Low: Narrative, Grounded Theory
3. Generality (Applicability Across Contexts)
o High: Quantification, Synthetic
o Moderate: Alternate Templates, Temporal Bracketing
o Low: Narrative, Grounded Theory
Key Insight: No single strategy is best—each has trade-offs. Combining multiple approaches enhances theoretical robustness.

5. Conclusion: Theorizing as an Iterative Process
Theorizing from process data is complex—no single approach guarantees success.
Combining multiple strategies (e.g., narrative + quantification + visual mapping) can improve rigor.
Process research requires creativity—rigid frameworks can limit discovery.
Induction, deduction, and inspiration all play a role in developing robust process theories.
Final Takeaway: Process theorizing is both an art and a science, requiring a balance between empirical rigor, conceptual clarity, and theoretical innovation.

Key Takeaways for Exam Preparation
1.
What are process data and process theory?
o Data involving sequences of events over time.
o Theory explaining how and why changes occur.
2. What are the seven strategies for analyzing process data?
o Narrative, Quantification, Alternate Templates, Grounded Theory, Visual Mapping, Temporal Bracketing, and Synthetic Strategy.
3. Which strategies offer accuracy vs. generalizability?
o Narrative/Grounded Theory = High Accuracy
o Quantification/Synthetic = High Generalizability
4. How can researchers enhance process theorizing?
o Combine multiple strategies for better insights.

Gehman, J., Glaser, V. L., Eisenhardt, K. M., Gioia, D., Langley, A., & Corley, K. G. (2017). "Finding Theory–Method Fit: A Comparison of Three Qualitative Approaches to Theory Building." Journal of Management Inquiry, 27(3), 284-300. DOI: 10.1177/1056492617706029

1. Purpose of the Article
This article provides a comparative analysis of three major qualitative research approaches for theory building:
1. Gioia Methodology (Grounded, Interpretivist Approach)
2. Eisenhardt Methodology (Theory Building from Cases)
3. Langley's Process Research Approach
The authors emphasize the importance of "theory–method fit", arguing that researchers should align their research questions, methods, and theoretical contributions.
The paper synthesizes discussions from a 2016 Academy of Management symposium, where these leading scholars debated similarities, differences, and best practices in qualitative research.
Key Question: How can researchers ensure that their qualitative methods align with their theoretical goals?

2. The Three Qualitative Approaches to Theory Building
Each approach has different assumptions, techniques, and goals.

1) The Gioia Methodology (Interpretivist, Grounded Theory)
Developed by Denny Gioia, this approach is interpretivist and focuses on how people make sense of their experiences.
Emphasizes inductive theory building, where concepts emerge from participants' own words.
Uses first-order (informant-centric) and second-order (researcher-centric) coding to structure data.
Requires a data structure that organizes raw data into themes and theoretical constructs.
Best for:
Studying meaning, identity, and social constructions.
Understanding participants' perspectives in depth.
Example: Gioia & Chittipeddi (1991) studied sensemaking and sensegiving in organizational change.

2) Eisenhardt Methodology (Theory Building from Cases)
Developed by Kathleen Eisenhardt, this approach focuses on building midrange theory through comparative case studies.
Uses multiple cases for replication logic, strengthening the validity of emergent theories.
Balances induction and deduction—new theories emerge from case data but are tested against existing literature.
Control of variance is key—researchers select cases strategically to compare differences and similarities.
Best for:
Studying organizational strategies, decision-making, and innovation.
Comparing how different firms or individuals respond to similar conditions.
Example: Eisenhardt (1989) studied decision-making speed in high-velocity environments using multiple case comparisons.

3) Langley's Process Research Approach
Developed by Ann Langley, this approach focuses on process theorizing—explaining how phenomena unfold over time.
Uses multiple analytical strategies, including:
o Narrative strategy (detailed case descriptions)
o Visual mapping (timelines and flowcharts)
o Temporal bracketing (dividing processes into phases)
Less focused on static concepts, more on temporal evolution and dynamics.
Best for:
Studying organizational change, strategy evolution, and long-term adaptation.
Understanding how events interact over time.
Example: Langley (1999) studied organizational change processes using different process analysis strategies.

3. Key Similarities Across the Approaches
All three methods use rich qualitative data (interviews, observations, archival records).
All aim to develop strong theoretical contributions.
Each acknowledges the importance of iteration between data and theory.
Commonality: They all emphasize qualitative rigor, but each balances inductive discovery vs. theoretical structuring differently.

4. Key Differences Across the Approaches

Aspect | Gioia Method (Interpretivist) | Eisenhardt Method (Comparative Case) | Langley Method (Process Theory)
Theoretical Goal | Understanding social constructions | Building generalizable midrange theories | Explaining how processes unfold
Use of Cases | Typically single case or a few | Multiple cases for comparison | Longitudinal process data
Coding Approach | First-order & second-order coding | Cross-case pattern recognition | Temporal bracketing & visual mapping
Role of Participants | Knowledgeable agents | Cases are comparative units | Participants' actions shape evolving processes
Example Studies | Identity, meaning, sensemaking | Strategy, decision-making, innovation | Organizational change, adaptation

Key Takeaway: Choosing the right approach depends on whether the goal is meaning (Gioia), comparison (Eisenhardt), or process explanation (Langley).

5. Implications for Qualitative Researchers
How to Choose the Right Method?
The authors suggest matching the research question with the appropriate method:
If studying individual meaning-making → Use Gioia's approach.
If comparing firms or strategies → Use Eisenhardt's case study method.
If studying processes over time → Use Langley's process approach.
Avoiding the "Method Mashup"
A common mistake in qualitative research is combining citations from different approaches without considering their distinct ontologies.
Example of a bad approach: Citing Gioia (sensemaking) and Eisenhardt (comparative cases) in the same study without explaining why.
Theory–Method Fit is Crucial
Researchers should be consistent in their epistemological stance.
Blending methods is possible, but only if done carefully with clear justification.

6. Conclusion: The Need for Methodological Clarity
Qualitative research is diverse—different methods serve different theory-building goals.
Good qualitative research is about coherence—researchers should align their method with their theoretical aims.
The authors encourage scholars to be explicit about their methodological choices rather than applying a mix-and-match approach.
Final Takeaway:
Gioia's method = Meaning & sensemaking
Eisenhardt's method = Comparative, midrange theory
Langley's method = Processual, time-based theory

Key Takeaways for Exam Preparation
1. What are the three qualitative approaches compared in the article?
o Gioia (Interpretivist) → Meaning & identity
o Eisenhardt (Comparative Case) → Midrange theory
o Langley (Process Theory) → Time-based analysis
2. What is "theory–method fit" and why is it important?
o Ensuring that the method aligns with theoretical goals.
o Avoiding random citation mashups of different methodologies.
3. How do these approaches handle data?
o Gioia: First-order/second-order coding
o Eisenhardt: Cross-case pattern analysis
o Langley: Temporal bracketing & process visualization
4. How should researchers select the best approach?
o Based on whether their study focuses on meaning, comparison, or process evolution.

Final Exam Tip
Expect questions on:
Comparing the three approaches
When to use each method
Why "theory–method fit" matters
Examples of studies that use each approach

- On theorising from qualitative data:

Grodal, S., Anteby, M., & Holm, A. L. (2020). "Achieving Rigor in Qualitative Analysis: The Role of Active Categorization in Theory Building." Academy of Management Review, 45(3), 247-265. DOI: 10.5465/amr.2018.0482

1. Purpose of the Article
This article addresses the longstanding debate on rigor in qualitative research, particularly in the process of moving from data to theory.
The authors argue that qualitative analysis is fundamentally a categorization process, yet categorization theory has not been fully applied to qualitative research methodologies. Key Question: How can qualitative researchers increase rigor in theory building through active categorization? The authors develop a framework of eight key analytical moves that qualitative researchers use to construct theory from data. They emphasize the need for reflexivity and transparency in qualitative analysis. 2. The Core Argument: Active Categorization in Theory Building The authors argue that theorizing is an active process—researchers do not simply extract categories from data but actively shape categories through a series of analytical moves. If researchers do not clearly articulate these moves, their analytical process lacks transparency and rigor. Why is Categorization Important in Qualitative Research? Categorization is the foundation of qualitative theory-building—data are grouped into themes that become theoretical constructs. Researchers make subjective choices when forming categories, yet these choices are often not explicitly stated. Without transparency, readers cannot assess how researchers moved from raw data to theory, reducing credibility and rigor. Key Takeaway: By acknowledging and articulating their categorization moves, researchers can improve transparency, rigor, and methodological clarity. 3. The Framework: Eight Analytical Moves in Qualitative Research The authors identify eight key moves that researchers use when analyzing qualitative data. These moves guide the transition from raw data to structured theoretical insights. 1) Asking Questions Researchers must actively engage with their data by posing questions. Asking “What is happening here?” and “What does this data suggest?” can uncover new theoretical directions. Helps in generating initial categories. 
Example: A study on workplace culture might start with "How do employees express organizational identity?" leading to categories like rituals, symbols, and language. 2) Focusing on Puzzles Researchers should identify surprising, contradictory, or unexpected elements in the data. “Negative cases”—instances that do not fit existing theories—can trigger new insights. Example: A researcher studying leadership might notice that informal leaders exert more influence than formal managers, leading to a new theoretical model. 3) Dropping Categories Researchers often create many initial categories, but not all remain relevant. Categories that do not contribute to theoretical insights should be dropped. Example: A study on remote work might start with "employee dress code" as a category, but later drop it if it does not impact workplace productivity. 4) Merging Categories Combining related categories into higher-order concepts increases clarity. Helps in synthesizing large amounts of data. Example: “Team meetings” and “collaborative work sessions” might be merged under "Collective decision-making processes". 5) Splitting Categories The opposite of merging—breaking down broad categories into more precise subcategories. Useful when categories are too broad to be meaningful. Example: Instead of “employee motivation”, a researcher might split it into "intrinsic motivation" and "extrinsic motivation". 6) Relating and Contrasting Categories Identifying how categories interact with each other or stand in opposition. Helps in developing relationships between theoretical constructs. Example: A study on innovation might contrast “formal R&D processes” with “emergent innovation from employee experimentation”. 7) Sequencing Categories Ordering categories chronologically or causally to reveal patterns over time. Important for process research and longitudinal studies. Example: A study on entrepreneurship might sequence "Idea generation → Prototype development → Market entry". 
8) Developing and Dropping Working Hypotheses
Researchers may form initial hypotheses and refine or discard them based on data.
This move ensures that the final theoretical model is robust and data-driven.
Example: A study initially assumes "more diverse teams lead to better decision-making", but later drops this hypothesis if the data suggest decision-making depends more on team structure than diversity.
4. Implications for Achieving Rigor in Qualitative Research
The authors propose three key ways to increase rigor in qualitative research:
1. Be Reflexive
o Researchers should actively reflect on how they construct categories rather than assuming categories "naturally emerge" from data.
2. Be Explicit
o Clearly document analytical moves to make the research process transparent.
o Instead of saying "themes emerged", researchers should explain how themes were constructed.
3. Embrace Methodological Plurality
o Researchers should combine different categorization moves rather than rigidly following a single method.
o No single approach (e.g., grounded theory, case study) should be seen as the only path to rigorous qualitative research.
Final Takeaway: By actively constructing and transparently reporting their categorization process, qualitative researchers can enhance rigor and credibility in their studies.
5. Conclusion: Why This Article Matters
This article challenges the assumption that qualitative theory-building is simply about discovering existing themes. Instead, it emphasizes that researchers actively create theoretical constructs through systematic analytical moves.
Key Contributions:
Bridges categorization theory with qualitative research methodologies.
Provides a framework of eight analytical moves to guide rigorous qualitative analysis.
Encourages greater reflexivity and transparency in qualitative research.
Practical Applications:
Helps doctoral students and early-career researchers improve their qualitative analysis.
Guides reviewers and journal editors in assessing rigor in qualitative submissions. Encourages methodological innovation in management and organizational research. Key Takeaways for Exam Preparation 1. What is the core argument of the article? o Qualitative research is an active categorization process, not a passive discovery of themes. 2. What are the eight analytical moves researchers use to categorize data? o Asking Questions, Focusing on Puzzles, Dropping Categories, Merging Categories, Splitting Categories, Relating/Contrasting Categories, Sequencing Categories, Developing/Dropping Hypotheses. 3. How does active categorization improve rigor in qualitative research? o Enhances transparency, reduces researcher bias, and clarifies how theory is built from data. 4. How should qualitative researchers improve their analytical process? o Be reflexive, explicitly document their categorization process, and embrace multiple analytical moves. Final Exam Tip Expect questions on: The role of categorization in qualitative analysis How the eight moves help build theory Why transparency and reflexivity are crucial for rigor Klag, M., & Langley, A. (2012). "Approaching the Conceptual Leap in Qualitative Research." International Journal of Management Reviews, 15(2), 149-166. DOI: 10.1111/j.1468-2370.2012.00349.x 1. Purpose of the Article This article explores a fundamental challenge in qualitative research: How do researchers move from empirical data to abstract theoretical insights? This process, referred to as the “conceptual leap,” is often mysterious and difficult to formalize in research methodologies. The authors argue that conceptual leaps involve abductive reasoning and propose a dialectical framework to help researchers navigate this process more systematically. 2. What is a Conceptual Leap? A conceptual leap occurs when researchers move beyond raw data to formulate abstract theoretical ideas. It involves two key elements: 1. 
Seeing → Recognizing new patterns or insights in the data.
2. Articulating → Expressing those insights clearly in theoretical terms.
Example: A researcher studying leadership behaviors in startups might initially collect interviews and observations. The conceptual leap occurs when they realize that founders use "crisis storytelling" to maintain team cohesion, leading to a new theoretical concept: Narrative Leadership in Startups.
3. The Role of Abductive Reasoning
Abduction (Peirce, 1958) is a cognitive process where researchers generate new hypotheses by making unexpected connections between data and theory.
It differs from induction (generalizing from data) and deduction (applying existing theory to data).
Abduction allows researchers to make sense of surprises or contradictions in their findings.
Example: A study on workplace culture might reveal that employees resist flexible work policies despite claiming to want autonomy. Instead of ignoring this contradiction, abductive reasoning pushes the researcher to ask: "What unseen social norms or power dynamics might explain this?" This leads to new theoretical insights about implicit organizational control mechanisms.
4. The Four Dialectic Tensions in Conceptual Leaping
The authors propose that conceptual leaps emerge from navigating four key tensions:
1) Deliberation vs. Serendipity: Balancing systematic analysis with unexpected discoveries. Example: A researcher studying entrepreneurship finds an unexpected pattern in failed startups.
2) Engagement vs. Detachment: Immersing deeply in data but also stepping back for new perspectives. Example: Taking breaks from coding to let insights emerge naturally.
3) Knowing vs. Not Knowing: Leveraging expertise while remaining open to new ideas. Example: Experienced researchers must resist forcing data into pre-existing theories.
4) Self-Expression vs. Social Connection: Balancing personal insights with academic validation. Example: Researchers must engage with peer feedback to refine their ideas.
Key Takeaway: The best research embraces both sides of each tension rather than relying too heavily on one.
5. How to Facilitate Conceptual Leaps?
The authors suggest several techniques to stimulate theoretical insights:
1) Using Heuristics to Generate Theoretical Variety
Meta-Theoretical Category Checklists → Exploring different theoretical lenses.
Metaphors & Analogies → Applying concepts from other fields (e.g., "Organizational DNA").
Reversing Assumptions → Asking "What if the opposite were true?"
Temporal Shifting → Considering how phenomena evolve over time.
Example: A researcher studying mergers & acquisitions could apply biological metaphors (e.g., "organ rejection") to understand why some mergers fail.
2) Encouraging Serendipity
Expose yourself to diverse readings and disciplines → Insights often come from unexpected connections.
Look for surprises in your data → The best discoveries come from contradictions.
Allow incubation periods → Taking breaks can help insights crystallize.
Example: A researcher studying leadership styles might accidentally notice that introverted leaders perform better in crisis situations, leading to new theory development.
3) Engaging Deeply with Data
Constant Comparison → Comparing cases, time periods, or concepts.
Asking Generative Questions → e.g., "What is happening here?"
Using Visual Mapping → Sketching relationships between concepts.
Example: A researcher studying remote work productivity might create a timeline to see how productivity patterns change over time.
4) Embracing Intellectual Humility
Conceptual leaps require acknowledging uncertainty.
Doubt fuels discovery: researchers must question their own assumptions.
Junior researchers may have an advantage because they are less tied to existing theories.
Example: A doctoral student, unfamiliar with dominant theories of corporate culture, might notice a novel pattern that more experienced scholars overlook. 6. The Role of Communication in Conceptual Leaping Writing is not just a way to present findings—it is a method of discovery. Memo-writing and visual diagrams help clarify ideas. Engaging with colleagues, reviewers, and mentors strengthens theoretical contributions. Example: A researcher struggling with an abstract concept might explain it to a colleague, leading to sudden clarity. 7. Conclusion: Conceptual Leaps as Bricolage The authors describe qualitative research as a form of "bricolage" (Lévi-Strauss, 1962): Researchers build theory from multiple sources. Combining structure (discipline) with openness (imagination) is key. Good theory-building requires both systematic effort and creative intuition. Final Takeaway: Conceptual leaps are not just lucky insights—they result from disciplined, yet flexible, qualitative research practices. By embracing both structure and creativity, researchers can enhance theoretical rigor and originality. Key Takeaways for Exam Preparation 1. What is a conceptual leap? o The transition from raw data to theoretical insight. o Involves seeing new patterns and articulating them clearly. 2. What is abductive reasoning and why is it important? o A mix of induction and deduction. o Helps explain unexpected findings. 3. What are the four dialectic tensions in qualitative research? o Deliberation vs. Serendipity o Engagement vs. Detachment o Knowing vs. Not Knowing o Self-Expression vs. Social Connection 4. What strategies help facilitate conceptual leaps? o Heuristics (checklists, metaphors, reversals). o Serendipity (reading widely, embracing surprises). o Deep engagement (constant comparison, visualization). o Communication (writing memos, discussing with peers). 
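As a loose analogy for the "constant comparison" strategy listed above: if coded interview excerpts were stored as data, comparing codes across cases amounts to a simple grouping operation. The sketch below is a toy illustration only; all cases, codes, and excerpts are invented, and real constant comparison is an interpretive practice, not a computation.

```python
from collections import defaultdict

# Invented toy data: excerpts tagged with a case and an analyst-assigned code.
excerpts = [
    {"case": "Startup A", "code": "crisis storytelling",
     "text": "We rally the team with war stories."},
    {"case": "Startup A", "code": "informal leadership",
     "text": "People follow Dana, not the org chart."},
    {"case": "Startup B", "code": "crisis storytelling",
     "text": "Every setback becomes a founding myth."},
    {"case": "Startup B", "code": "formal planning",
     "text": "We run everything through quarterly plans."},
]

def compare_codes(excerpts):
    """Constant comparison, mechanically: which codes recur across cases,
    and which appear in only one case (candidate puzzles / negative cases)?"""
    cases_per_code = defaultdict(set)
    for e in excerpts:
        cases_per_code[e["code"]].add(e["case"])
    shared = {c for c, cases in cases_per_code.items() if len(cases) > 1}
    unique = {c for c, cases in cases_per_code.items() if len(cases) == 1}
    return shared, unique

shared, unique = compare_codes(excerpts)
# "crisis storytelling" recurs across both startups; the other codes are
# case-specific and might prompt a closer look (or be dropped as categories).
```

The grouping only surfaces where comparison is worth doing; deciding what a recurring or case-specific code means remains the researcher's conceptual work.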
Final Exam Tip: Expect questions on: The nature of conceptual leaps How abductive reasoning works The four dialectic tensions Techniques for achieving rigor in qualitative research Pratt, M. G., Sonenshein, S., & Feldman, M. S. (2020). "Moving Beyond Templates: A Bricolage Approach to Conducting Trustworthy Qualitative Research." Organizational Research Methods, 1-28. DOI: 10.1177/1094428120927466 1. Purpose of the Article This article critiques the over-reliance on methodological templates in qualitative research and introduces methodological bricolage as a more flexible, creative, and context-sensitive approach. The authors argue that templates—such as the Eisenhardt method for case studies and the Gioia method for grounded theory— have become rigid formulas rather than guiding frameworks, limiting creativity and methodological diversity in qualitative research. Key Question: How can researchers maintain rigor and trustworthiness in qualitative research while moving beyond rigid methodological templates? Key Argument: The authors propose methodological bricolage, a flexible approach where researchers combine and adapt multiple methodological moves to fit their specific research questions and contexts. 2. Problems with Methodological Templates Methodological templates provide a structured, standardized way of conducting research. While they can simplify and increase accessibility for new researchers, they also pose serious challenges: 1. Encourages Rigid, Formulaic Research o Templates become rigid checklists that researchers follow without considering the unique needs of their study. o Reviewers and editors often demand adherence to these templates, further narrowing methodological choices. 2. Limits Theoretical Innovation o Templates constrain creativity by forcing researchers to fit findings into pre-defined structures. o Research that does not fit these templates may struggle to get published. 3. 
Leads to Superficial Understanding of Methods
o Many researchers apply templates mechanically without understanding their underlying epistemological assumptions.
o Example: Researchers using the Gioia method may produce a three-column data structure without fully engaging in grounded theory principles.
Key Takeaway: Templates should be flexible tools rather than rigid formulas; researchers should adapt them to their unique research contexts.
3. What is Methodological Bricolage?
The authors introduce methodological bricolage as an alternative to methodological templates.
Definition: Bricolage is an iterative, creative process where researchers combine, adapt, and integrate multiple methodological moves based on their specific research question and data.
Key Features of Bricolage:
"Making do" with available resources → Researchers select and modify methods as needed.
Combining different analytic moves → No one-size-fits-all approach.
Emphasizing creativity and flexibility → Methods evolve throughout the research process.
Maintaining methodological rigor → Despite flexibility, trustworthiness remains central.
Example: Instead of strictly following the Gioia method, a researcher studying identity construction might:
Use narrative analysis to explore how individuals tell their identity stories.
Apply case study comparisons to contrast identity-building across different organizations.
Use thematic coding to group emerging themes inductively.
Why is Bricolage Important?
Encourages methodological diversity.
Allows better alignment between methods and research questions.
Supports theoretical innovation by avoiding rigid methodological constraints.
4. Three Key Elements of Bricolage
To establish trustworthiness in bricolage, researchers must ensure their approach exhibits:
1) Competence: Demonstrating methodological expertise and justifying method choices. How to achieve it: Cite established methods and explain why certain choices were made.
2) Integrity: Ensuring coherence between research question, methods, and findings. How to achieve it: Align sampling, data collection, and analysis with research objectives.
3) Benevolence: Respecting participant voices and experiences. How to achieve it: Show how theoretical insights emerge from real-world data.
Example: A researcher studying organizational routines might:
Use ethnography to observe real-time interactions (Competence).
Ensure findings reflect organizational dynamics over time (Integrity).
Present direct quotes and rich narratives to maintain authenticity (Benevolence).
5. Methodological Bricolage in Practice
The authors analyze three of their own studies to show how bricolage works in real research:
Example 1: Sonenshein (2010) – Meaning Construction in Organizational Change
Challenge: Needed to understand how managers and employees interpret change. Large-scale study with geographically dispersed employees.
Bricolage Moves Used:
Narrative Analysis → Constructed composite narratives from multiple sources.
Thematic Analysis → Identified patterns in responses across different groups.
Survey Analysis → Used short-answer surveys to capture employee perspectives.
Outcome: Enabled a rich, contextualized understanding of how meaning evolves during change.
Example 2: Pratt & Rosa (2003) – Work-Family Tensions in Network Marketing
Challenge: Combining multiple qualitative datasets from different data sources.
Bricolage Moves Used:
Case Study Logic → Justified including multiple datasets.
Ethnographic Methods → Used participant observation for deeper insights.
Grounded Theory → Adjusted interview protocols as theoretical insights emerged.
Outcome: Successfully combined multiple data sources to create a new theoretical model.
Example 3: Feldman (2000) – Stability and Change in Organizational Routines
Challenge: Initially focused on stability, but found unexpected changes in routines. Needed new analytical tools to explain findings.
Bricolage Moves Used: Ethnographic Description → Provided rich, detailed case narratives. Meta-Theoretical Analysis → Used multiple theoretical lenses (ethnomethodology, semiotics, dramaturgy). Vignettes & Tables → Presented diverse forms of data. Outcome: Identified how stability and change coexist in routines. 6. Conclusion: Rethinking Rigor in Qualitative Research The authors argue that methodological bricolage provides a more flexible, yet rigorous, approach to qualitative research. Final Takeaways: Templates should guide, not dictate, research design. Bricolage allows researchers to tailor methods to their research questions. Maintaining trustworthiness (competence, integrity, benevolence) is key. Practical Implications: Researchers should justify their method choices. Reviewers should evaluate research based on theoretical contributions, not just adherence to templates. Qualitative research should remain methodologically diverse. Key Takeaways for Exam Preparation 1. What is the main critique of methodological templates? o They limit creativity, reduce theoretical innovation, and become rigid checklists. 2. What is methodological bricolage? o A flexible approach where researchers combine multiple methodological moves. 3. What are the three key elements of trustworthiness in bricolage? o Competence, Integrity, and Benevolence. 4. How can researchers apply bricolage in practice? o Use multiple analytic methods, adapt data collection strategies, and ensure coherence between research questions and findings. Final Exam Tip: Expect questions on: The problems with methodological templates. How bricolage enhances qualitative research. Examples of bricolage in practice. - On writing and visualization: Pratt, M. G. (2009). "For the Lack of a Boilerplate: Tips on Writing Up (and Reviewing) Qualitative Research." Academy of Management Journal, 52(5), 856-862. DOI: 10.5465/AMJ.2009.44632557 1. 
Purpose of the Article This editorial provides practical guidance for writing and reviewing qualitative research. Pratt argues that, unlike quantitative research, qualitative research lacks a standardized template ("boilerplate") for writing and evaluation, making publication in top-tier journals challenging. He discusses common pitfalls in qualitative writing and suggests best practices for crafting compelling, methodologically sound research. Key Question: How can qualitative researchers write effectively and increase their chances of publication in high-impact journals? 2. The Lack of a Boilerplate in Qualitative Research Why is there no standard template? 1. Diverse Methods & Epistemologies → Qualitative research includes ethnography, case studies, grounded theory, etc., each with different assumptions. 2. No Universal “Significance Level” → Unlike statistical methods, qualitative research lacks a single metric (e.g., p-values) for determining validity. 3. Context-Specific Standards → What counts as “enough data” depends on the research question, setting, and theoretical purpose. Implication: Without a clear standard, reviewers and authors struggle to assess the quality of qualitative work, leading to rejections due to unclear contributions. 3. Common Mistakes in Writing Qualitative Research Pratt identifies two major "dangerous paths" that weaken qualitative manuscripts: 1) Imbalance Between Theory and Data Too much telling, not enough showing → Authors summarize findings without including direct data (e.g., quotes, excerpts). Too much showing, not enough interpreting → Overloading the manuscript with raw data without clear theoretical analysis. Fix → Use a balance of direct data and interpretation to demonstrate how themes emerge from data. Example: A study on workplace identity should include direct participant quotes alongside a discussion of how the quotes illustrate broader theoretical insights. 
2) Making Qualitative Research Look Quantitative Using deductive “short-hand” → Some authors write as if they are testing h