Document Details


Uploaded by NeatestPedalSteelGuitar3724

Hasselt University

Tags

business research research methods applied research business management

Summary

These lecture notes provide an introduction to business research, defining it, differentiating between basic and applied research, and discussing the usefulness of internal or external research teams. The document also covers the characteristics of good scientific research, including purposiveness, rigor, and testability. The notes then cover various areas relevant to business research and the differences in research approaches (deductive vs. inductive).

Full Transcript


Lecture 1 - Chapter 1 - Introduction to research

Topics: definition of business research; difference between applied and basic research; why managers need to know about research; usefulness of internal or external research teams.

Business research: a definition
Research = the process of finding solutions to a problem after a targeted and systematic study and analysis of materials and sources.
Business research = the systematic and organized effort to investigate a specific decision problem encountered in the work setting that needs a solution. It is a series of steps executed to provide a data-based, critical, objective, scientific inquiry into a specific problem, with the purpose of finding answers to an issue that is of concern to the manager in the work environment.
1. Identify the problem area in the organization
↓
2. Formulate research objectives and questions
↓
3. Plan, collect, and analyze data
↓
4. Draw conclusions that help the manager deal with the problem situation

What does business research data look like?
Raw data → numbers and words
Primary data → data you go and collect yourself
Secondary data → data someone else has already collected, whose primary purpose was not research → newspapers, social media

Commonly researched areas in business
Issues that may need to be studied in business relate to accounting and finance, management, marketing, HRM, and so on: mergers & acquisitions; risk assessment of foreign investments; executive compensation; product image; new product development; consumer decision-making; complaint handling; asset management; employee happiness; validation of performance appraisal systems; rating errors in performance evaluation; acceptance of new IT programs.

Basic vs applied business research
Basic / fundamental research: aims to generate a body of knowledge by trying to comprehend how certain problems occurring in organizations can be solved. E.g., the causes and consequences of global warming.
In short: generating knowledge and understanding of phenomena and problems that occur in various organizational settings.

Applied research: aims to solve a decision problem currently faced by the manager in the work setting, demanding a timely solution. E.g., applying knowledge about global warming to the construction industry. In short: solving currently experienced problems within a specific organization.

The relationship between basic and applied research: "Though the goal of engaging in basic research is primarily to equip oneself with additional knowledge of certain phenomena and problems that occur in several organizations and industries with a view of finding solutions, the knowledge generated from such research is often applied for solving organizational problems." Applied research looks at one specific organization; basic research looks across companies and sectors.

Who does basic/fundamental/pure research?
It is not the sole property of universities (students and teachers). Many big companies also do basic research, either directly themselves or by funding research centers. BMW aims to further reduce its fleet's greenhouse gas emissions by promoting electromobility innovations. Facebook studies online behavior and interactions to gain insights into how social and technological forces interact, and invests in basic biomedical engineering research through the Chan Zuckerberg Foundation.

Exercise: Companies are very interested in acquiring other firms, even when the latter operate in totally unrelated realms of business. For example, Coca-Cola has announced that it wants to buy China Huiyuan Juice Group in an effort to expand its activities in one of the world's fastest-growing beverage markets. Such acquisitions are claimed to "work miracles." However, given the volatility of the stock market and the slowing down of business, many companies are not sure whether such acquisitions involve too much risk. At the same time, they also wonder if they are missing out on a great business opportunity if they fail to take such a risk.
Some research is needed here! Is this basic or applied research? This is a general issue; the results can be useful to, and applied by, all concerned companies. So it can be either basic or applied, depending on who sponsors the study (e.g., if a company commissions it to find an answer for immediate application, it is applied research).

Why do managers need research skills?
A manager does not necessarily conduct any research, but has to understand, predict, and control events that are dysfunctional within the organization (e.g., a newly developed product that is not taking off). Minor problems may be fixed by the manager, but major problems require hiring outside researchers/consultants. Managers need to:
- interpret the results of published research that has addressed similar issues;
- interact effectively with hired researchers and consultants;
- convert research reports about their organization, handed over by professionals, into actions;
- understand the biases and limitations of research.

Internal or external researchers?
When a manager hires an external consultant, they need to make sure that:
- roles and expectations are clear for both parties from the start (e.g., the boundaries of the research assignment, which information can be provided);
- the values and constraints of the organization are communicated (e.g., GDPR; firing is not an option; ...);
- employees trust the consultant and will cooperate where needed.
An internal researcher (could come from the R&D department, management services department, HR department, ...):
+ has a better chance of being readily accepted
+ needs less time to understand the structure of the organization
+ is available to implement and evaluate the recommendations
+ costs less money
- is not considered an expert
- brings a structured, inside-out way of looking at things rather than fresh ideas
- is subject to organizational biases

Important arguments for choosing one or the other (A = advantage, D = disadvantage):
Price (cheaper) = internal (A)
Acceptance = internal (A)
Implementation of conclusions = internal (A)
Understanding of the organization = internal (A)
Organizational biases = internal (D)
New ways of thinking = external (A)
Broad experience = external (A)
Extensive training = external (A)
Conflicts of interest = external (D)

Lecture 2 - Chapter 2 - Hallmarks of good research and different research approaches

From business research to good business research
Research = the process of finding solutions to a problem after a targeted and systematic study and analysis of materials and sources.
"Good" research = scientific research: not based solely on hunches, experience, and intuition, but purposive and rigorous. It should allow those interested in researching similar issues to come up with comparable findings when the data are analyzed, and it helps researchers state their findings with accuracy and confidence.

Is "bad research" ever desirable? Yes, sometimes you just need, e.g., a simple descriptive inquiry about job satisfaction among workers, or there is a lack of time or a lack of knowledge. But the probability of making wrong decisions goes up as research quality goes down.

Hallmarks, or main distinguishing characteristics, of scientific research: purposiveness, rigor, testability, replicability, precision and confidence, objectivity, generalizability, parsimony.

Running case example: a manager wants to know how employee commitment can be increased in their organization. How can we turn this into "scientific" business research?

Hallmark 1: Purposiveness
The purpose of the research is clear.
Case example:
In this case, the purpose is clear: a rise in commitment will lead to lower turnover, less absenteeism, and increased performance levels.

Hallmark 2: Rigor
A good theoretical base and a sound methodological design help the researcher collect the right kind of information from the appropriate sample with minimum bias, and facilitate suitable analysis of the data gathered.
Case example: Is it enough for the manager to ask 10 employees what would increase their commitment and base conclusions on that? No: the opinions of these few employees may not be representative of the workforce; the way the question is framed may have introduced bias in the responses; and there may be other important reasons for decreasing commitment than those mentioned by this small sample.

Hallmark 3: Testability
The research lends itself to testing logically developed hypotheses (= tentative yet testable statements that predict what you expect to find in your empirical data) to see whether these are supported or not by the collected data.
Case example: The manager, after having studied existing research on the topic, develops certain hypotheses on how employee commitment can be enhanced (e.g., through more participation in decision-making). They then test these with (quantitative) data collected from a random selection of employees.

Hallmark 4: Replicability
Can the study be redone? Replicability is made possible by a detailed description of the design details (e.g., sampling, data collection) of the study.

Hallmark 5: Precision and confidence
Our findings should be "as close to reality as possible."
Precision = the closeness of findings to "reality" based on a sample; the degree of accuracy or exactitude of the results on the basis of this sample.
Confidence = the probability that our estimations are correct, for instance, that 95% of the time our results will be true (a 95% confidence level, i.e., p = 0.05).
Case example: The predicted lost days due to absenteeism per year are 30-40, and the reality is 35.
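The precision/confidence idea can be made concrete with a small computation. The sketch below is illustrative only: the absenteeism figures are invented for the example, and a normal approximation (z = 1.96) is used instead of the t distribution for simplicity.

```python
import statistics

# Hypothetical sample: lost days due to absenteeism observed in 12 departments
sample = [31, 38, 35, 29, 40, 33, 36, 34, 37, 32, 39, 35]

n = len(sample)
mean = statistics.mean(sample)              # point estimate ("precision": closeness to reality)
sem = statistics.stdev(sample) / n ** 0.5   # standard error of the mean

# 95% confidence: z ~ 1.96 under a normal approximation
z = statistics.NormalDist().inv_cdf(0.975)
low, high = mean - z * sem, mean + z * sem

print(f"estimate: {mean:.1f} days, 95% CI: [{low:.1f}, {high:.1f}]")
# If the true value (say, 35 lost days) falls inside the interval,
# the estimate was both precise and stated with known confidence.
```

A narrower interval means more precision; the 95% figure is the confidence that intervals built this way capture the true value.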
Hallmark 6: Objectivity
The conclusions drawn are based on the facts of the findings, derived from actual data, and not on our own subjective or emotional values.
Case example: If our initial hypothesis was that greater participation in decisions leads to more employee commitment, but our data do not support this claim, we should not keep recommending increased participation (that would be based on the subjective opinion of the researcher, not the facts).

Hallmark 7: Generalizability
The scope of applicability of the research findings in one organizational setting to other settings.
Case example: Not many applied research findings can be generalized to other settings, situations, or organizations.

Hallmark 8: Parsimony
Simplicity is preferred to complex research frameworks with an unmanageable number of factors.
Case example: If changing 3 variables could raise commitment by 45%, that is better than 9 variables that raise commitment by 48% (too many factors are too hard for managers to control and change in daily reality).

Deductive vs inductive
Deductive (the scientific method): from a more general theory, narrowing it down into more specific hypotheses we can test. We then collect specific observations to test these, and confirm or refute the original theory. Most often used in causal and quantitative studies.
Inductive: we observe specific phenomena and, on the basis of these, arrive at general conclusions. More common in exploratory and qualitative studies.
Much research involves both theory generation (induction) and theory testing (deduction).

Positivism
- Sees scientific research as the way to get to the truth ("yes, there is an objective truth out there"), in order to understand, predict, and control the world.
- The world operates by laws of cause and effect.
- Stresses rigor, replicability, reliability, and generalizability; deductively tests theory.
- Favors the use of experiments to test cause-and-effect relations through manipulation and observation.
- Holds that knowledge of anything that is not directly observable or objectively measurable (e.g., emotions, feelings, thoughts) is impossible.

Constructionism
- Questions the objective truth: the world as we know it is fundamentally mentally constructed.
- Aims to understand the rules people use to make sense of the world by investigating what happens in people's minds, and how people's views of the world result from interactions with each other.
- Favors interviews, focus groups, ethnographies, and similar rich data, to understand specific cases rather than to make generalizations.

"The scientific method": the hypothetico-deductive method (Karl Popper)
A scientific method developed in the context of the natural sciences. Despite many objections to it, it is still the dominant systematic approach for generating knowledge in social and business research through testing a theory. The seven steps of the hypothetico-deductive method:
1. Observation: a problem is defined and research questions are developed. E.g., the issue of customer switching catches the manager's attention and leads to problem definition and RQ formulation.
2. Preliminary information gathering: seeking in-depth information about what is observed (what is happening, and why?). E.g., through a literature review (the literature on consumer switching) or by talking to people in the organization (why do customers want to switch?).
3. Theory formulation: examining the variables that could influence the problem and their network of associations, together with a justification of why they might influence the problem and how it can be solved.
4. Hypothesizing: a hypothesis must be testable and falsifiable (possible to disprove); it can never be definitively proved and remains provisional until it is disproved.
5. Data collection: this data forms the basis of the data analysis.
6. Data analysis: statistical analysis to see whether the hypotheses are supported. E.g.,
a test of the correlation between employee unresponsiveness and customer switching.
7. Interpretation of the data: interpreting the meaning of the results to conclude whether the hypotheses are supported or not.

Is the hypothetico-deductive approach always feasible for business research?
No. A manager may want to know, for instance: the percentage of the population that wants an iPhone (a representative survey, simple and descriptive: no cause and effect); how customers make decisions; how competitors position their products in the market; how satisfied employees are. Here the goal is not causal research.

Alternative approach: critical realism
- Believes in an external reality (an objective truth) but rejects the claim that it can be measured objectively: observations of what is not directly observable/measurable will always be subject to interpretation.
- Critical of our ability to understand the world with certainty, yet we should try (data collection will always be flawed and imperfect, and researchers are inherently biased).
- Stresses the importance of triangulation across multiple flawed and erroneous methods, observations, and researchers.

Alternative approach: pragmatism
- Believes that research on both objective, observable phenomena and subjective meanings can produce useful knowledge, depending on the problem studied; different viewpoints on a problem help in coming to solutions.
- Endorses eclecticism and pluralism; the current truth is tentative and may change over time.
- Theory is derived from practice and then applied back to practice (intelligent practice): the value of research lies in its practical relevance, and the value of theory is to inform practice.

Lecture 2 - Chapter 3 - Defining the management problem

Reasons to start an applied research project:
1. Proactively looking for decision opportunities or areas for improvement.
Information problem: more information on the situation is needed for the manager to make effective decisions. E.g., a new product introduction; new market entry; market share monitoring; customer analysis...
→ Calls for exploration.
2. Reactively fixing existing situations that are broken.
Action problem: a discrepancy between the desired and the actual state. E.g., staff turnover is 20%, which is above the industry rate; low customer satisfaction; low market share.
→ Calls for exploration and diagnosis.

Looking for decision opportunities or areas for improvement: to conduct research or not?
- Are the results potentially useful?
- Are there resources to implement the results?
- Are the attitudes of stakeholders towards the research favorable?
- What are the costs and benefits of the research project?

Exploration of an 'information' management problem
What does the manager want to find out, and why? Describe the following:
- The existing situation: in order to grow, we are considering entering a new market, but we are not sure how attractive the new market is...
- Why the situation is problematic: we need to make a market entry decision...
- The desired situation: obtaining insights into the long-term attractiveness of the market...

Exploration of an 'action' management problem
What is happening? Why is it problematic? Common pitfalls: the desire to achieve quick results; tunnel vision; confusing interpretations with facts.

Diagnosis of an 'action' management problem
- Different levels in the organization (team, individual, organization).
- Different tools to use (e.g., the Tichy matrix; 5 × Why).
- Importance of correct problem identification.

Different types of action problems:
- Technical/routine problem: clarity on what the problem is and sufficient knowledge on how to solve it; solving it depends on planning and a plan of action.
- Information problem: clarity on the problem but no clear way to solve it; requires more research on how to solve it (action-oriented research).
- Consensus problem: sufficient knowledge on how to solve the problem, but conflicting interests or different value systems and beliefs.
- Combined information and consensus problem: no clear way to solve it and no agreement on the constraints that the solution must meet.

Defining the management problem requires a description of:
- The existing situation: sick leave is 12%.
- Why it is problematic (the change motive or research motive): it is costly and messes up the planning of activities.
- The desired situation (the management objective): to reduce sick leave to 6%.

Information management problem vs action management problem:
- Information management problem: proactively looking for decision opportunities or areas for improvement; more information on the situation is needed for the manager to make effective decisions; calls for exploration.
- Action management problem: fixing situations that are broken; the manager needs to resolve the discrepancy between the desired and actual state; calls for exploration and diagnosis.

Lecture 3 - Chapter 4 - Defining the research problem

Topics: narrowing down the management problem to a research problem; developing a research problem (RO and RQs); the elements of a research proposal. Based on the description of the "management problem," the description of the "research problem" follows.

Defining the problem statement
The problem statement includes both:
- The research objective (RO): why the study is being done — to expand knowledge in the case of basic/fundamental research, or to solve a problem in the case of applied research. It may be refined after the literature review.
- The research question(s) (RQ): what you want to learn about the topic. RQs help you organize data collection and analysis and attain the research objectives; they can be related to specific literatures and are likely refined after the literature review.

In the early research process you alternate between preliminary research (a first review of the literature and contextual factors) and defining and redefining the problem statement.

What is 'preliminary research'?
A first review of the literature and contextual factors:
- Background information on the organization (or sector, ...), obtained through primary data (e.g., interviews) and/or secondary data (e.g.,
company records and archives).
- Topic information (textbooks, journal articles, conference proceedings).

A clear and focused problem statement
The research problem needs to be unambiguous, specific, and focused, and addressed from a specific academic perspective. The 'secret' to bringing clarity and focus to your research problem is to isolate the key ideas in the first version of the research question. For example, what are the subjects, verbs, and objects in the following statement? "To provide insight into why managers do not use the newly installed information system."

Some examples.
Research objectives (RO):
- To find out what motivates consumers to buy a product online.
- To understand the causes of employee absence.
- To determine the optimal price for a product.
- To establish the determinants of employee involvement.
- To investigate the relationship between the capital structure and the profitability of the firm.
Research questions (RQ):
- How has the new packaging affected the sales of the product?
- Does expansion of international operations result in an enhancement of the firm's image and value?
- How has the new advertising message resulted in enhanced recall?
- How do price and quality rate in consumers' evaluations of products?

Types of research questions

Exploratory RQs
When: not much is known and/or not enough theory is available to guide the development of a theoretical framework; existing results are unclear or limited; the topic is highly complex; ... Approach: qualitative; broad questions that are then narrowed down. E.g., comparing workers in a city in India with American workers.

Descriptive RQs
When: you want to obtain data that describes the topic of interest (e.g., the percentage of the population that prefers Coca-Cola over Pepsi; how managers resolve conflicts) or describes a relationship between variables (e.g., what the common CSR strategies in the coffee industry are). Approach: mostly quantitative (satisfaction ratings, demographic data, production figures), sometimes qualitative.

Causal RQs
When: to delineate the factors that cause a problem (e.g.,
what is the effect of teleworking on motivation?), i.e., whether one variable causes another to change. Approach: quantitative experimental designs.

1. Example of a problem statement (applied research)
Problem: CCA Airlines needs to better manage customer perceptions of the waiting experience (during delays).
RO: To identify factors influencing the waiting experience and to investigate the impact of waiting on customer satisfaction/evaluations...
RQs: What factors affect the perceived waiting experience for airline passengers? How does waiting affect service evaluations? How do situational factors (e.g., filled time) influence customer reactions to the waiting experience?

2. Example of a problem statement (basic/fundamental research)
Problem: Women make up 80% of the equestrian community at the amateur level but only 20% at the highest competitive levels.
RO: To understand the underrepresentation of women at the highest showjumping levels...
RQs: How do equestrian professionals feel about sex integration in their sport? Why, according to equestrian professionals, are women largely absent from the top? What processes of subtle and/or overt gender discrimination exist in showjumping?

When is a problem statement good?
It needs to be unambiguous, specific, focused, and addressed from a specific academic perspective. It also needs to be:
- Relevant (from a managerial perspective, an academic perspective, or both), which is the case when it relates to: a problem that currently exists in an organizational setting; an area that a manager believes needs to be improved in the organization; a topic about which nothing is known; or a topic about which much is known but the knowledge is scattered, the results are contradictory, or established relationships do not hold in certain situations.
- Feasible: you are able to investigate what you want investigated, given time and money constraints.
- Interesting.
It should include statements of both the RO and the RQ.
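A causal RQ such as "does teleworking affect motivation?" is typically answered with an experimental design and a statistical comparison of groups. The sketch below is illustrative only: the motivation scores are invented, and a simple normal-approximation test statistic stands in for the proper t-test a real study would use.

```python
import statistics

# Hypothetical motivation scores (scale 1-10) from a randomized experiment:
# employees assigned to teleworking vs. working at the office
teleworking = [7.2, 6.8, 7.9, 7.5, 6.9, 7.4, 8.1, 7.0]
office      = [6.1, 6.5, 5.9, 6.8, 6.2, 6.0, 6.6, 6.3]

def mean_se(xs):
    """Return the sample mean and the standard error of the mean."""
    return statistics.mean(xs), statistics.stdev(xs) / len(xs) ** 0.5

m_tele, se_tele = mean_se(teleworking)
m_office, se_office = mean_se(office)

diff = m_tele - m_office                         # estimated causal effect
se_diff = (se_tele ** 2 + se_office ** 2) ** 0.5 # standard error of the difference
z = diff / se_diff                               # normal-approximation test statistic

print(f"difference in means: {diff:.2f}, test statistic: {z:.1f}")
# A statistic well above 1.96 suggests the difference is unlikely to be
# chance alone, i.e., the data would support the causal hypothesis.
```

Note that only random assignment (not the statistic itself) justifies the causal reading; with observational data the same computation would only show association.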
The research proposal
Before research is undertaken, you need an agreement between the researcher and the person authorizing the study. The proposal contains the following key information:
1. A working title
2. Background of the study
3. The management problem
4. The research problem: the purpose of the study and the research questions
5. Scope of the study
6. Relevance of the study
7. Methodology: type of study (exploratory or descriptive), data collection method, sampling design, data analysis
8. Timeframe: duration of the study
9. Budget: costs of the study
10. Selected bibliography
(See the example summary of a research proposal.)

Lecture 4 - Chapter 5 - The literature review

Definition of a literature review
A literature review is "the selection of available documents (both published and unpublished) on the topic, which contains information, ideas, data and evidence written from a particular standpoint to fulfil certain aims or express certain views on the nature of the topic and how it is to be investigated, and the effective evaluation of these documents in relation to the research being proposed" (Hart, 1998, p. 13). A literature review is thus in essence meant to be critical.

Doing a literature review provides:
1. Positioning: the research effort is positioned relative to existing knowledge and builds on this knowledge.
2. Novelty: you do not run the risk of "reinventing the wheel," that is, wasting effort on trying to rediscover something that is already known.
3. Contribution: the research effort can be contextualized in a wider academic debate; it allows you to relate your findings to the findings of others.
4. Clarity: you are able to introduce relevant terminology and define the key terms used in your writing. This is important because the same term may have different meanings depending on the context.
5.
Guidance: you obtain useful insights into the research methods that others have used to answer similar research questions.

Different functions, but always relevant
Depending on the type of research, a literature review has a specific function (Box 5.2):
- Descriptive study: describe what is already known and can already be applied.
- Inductive/exploratory study: build an argument for why an exploratory study is needed.
- Deductive study: get a clear idea of which variables to measure and the expected relationships.
The goal is to identify, based on the review, the objectives of the research to be done. The conclusion of a critical literature review is thus a clear idea of what you want to do with your research.

How to do a critical literature review?
Recall the definition: a literature review is "the (2) selection of (1) available documents (both published and unpublished) on the topic, which contains information, ideas, data and evidence written from a particular standpoint to fulfil certain aims or express certain views on the nature of the topic and how it is to be investigated, and the effective evaluation of these documents in relation to the research being proposed."

1. Available. The first step is to identify which materials (published and unpublished) are available. Possible data sources:
1. Textbooks = thorough; less up-to-date.
2. Journal publications = peer-reviewed; recent developments; usually one piece of research.
3. Theses = exhaustive literature reviews; some research; quality not comparable to journals.
4. Conference proceedings = the latest research; critical assessment needed.
5. Unpublished manuscripts = in the process of being published; peer-reviewed; very novel and up-to-date.
6. Reports* = usually government-issued; an overview of a market, industry, or topic.
7. Newspapers* = up-to-date business information; can be biased in nature.
(* = non-academic sources)

2. Selection. The second step is to select the relevant materials.
How to evaluate the relevance of the materials? Possible approaches:
- Read the abstract (journal publications), the table of contents (books), or the summary (reports) first.
- Apply the RADAR principle (see the checklist below): Rationality (why did the writer publish this?), Authority, Date, Appearance, Relevance.
- Check the impact of the material: the journal impact factor; "cited by" counts.

Evaluating the research means asking yourself:
- Is the main research question or problem statement presented in a clear and analytical way? Is the relevance of the research question made transparent?
- Does this study build directly upon previous research? Will the study make a contribution to the field?
- Is there a theory that guides the research? Is the theory described relevant, and is it explained in an understandable, structured, and convincing manner?
- Are the methods used in the study explained in a clear manner? Is the choice of methods motivated in a convincing way?
- Is the sample appropriate? Are the research design and/or the questionnaire appropriate for this study?
- Are the measures of the variables valid and reliable?
- Has the author used the appropriate quantitative/qualitative techniques?
- Do the conclusions result from the findings of the study? Do the conclusions give a clear answer to the main research question?
- Has the author considered the limitations of the study? Has the author presented the limitations in the article?

What's the final step?
3. The effective evaluation. The final step is to present the literature you selected as relevant in such a manner that there is an effective evaluation. A literature review is intended to synthesize, not merely summarize. To synthesize = to combine two or more elements to make a new whole.
Reference correctly! The style of referencing is APA (see the appendix to Chapter 5; also books, information on university websites, ….
Two ethical issues:
1. Purposely misrepresenting the work of others.
2. Plagiarism (Box 5.3): either not citing the source at all, or citing it but still plagiarizing.

Checklist RADAR (Mandalios, 2013)
- Rationality: Why did the writer publish this? To produce a balanced, well-researched, professional piece of work that adds to the body of knowledge? Was it written as part of an ongoing debate to counter an opposing claim? Or is it propaganda, biased, written to sell something, or a spoof site written for fun? Note: a biased or problematic site may still be useful to you; the key is to recognize its bias or limitations.
- Authority: Who is the author (a person or an organization)? What tells you that they are authoritative? What are their credentials? Is the author well known and respected? Does the author work for a reputable institution, e.g., a university, research center, or organization (NASA)? Does the author have good qualifications and experience? What does the 'About Us' button tell you? Is other information available about them (e.g., via Google)? Does the URL of the site give you clues about authority? Does knowing the authority of the site help you judge the accuracy of the information? Even if you have doubts about the authority of the site, does it contain links to other authoritative or helpful sources?
- Date: When was the information published? Is the publication date important to you?
- Appearance: What clues can you get from the appearance of the source? Does the information look serious and professional? Does it have citations and references? Is it written in formal, academic language? Or does it look as if it was written by a non-professional, published for children, or made to sell something?
- Relevance: How is the information that you have found relevant to your assignment?
Lecture 5 - Chapter 6 - Theoretical framework and hypothesis development

Definition of a theoretical framework
A theoretical framework represents one's beliefs on how certain phenomena are related to each other, together with an explanation of why one believes that these variables are related. It is associated with deductive research, which aims to test hypotheses, after which the beliefs of the researcher are confirmed, refuted, or modified.

Building a theoretical framework consists of the following steps:
1. Introducing definitions of the concepts or variables in your model.
2. Developing a conceptual model that provides a descriptive representation of your theory (= a visual representation of the expected relationships).
3. Coming up with a theory that provides an explanation for the relationships between the variables in your model.
⇒ End result: the development of testable hypotheses.
The role of the critical literature review lies in finding the variables, the expected relationships, and the theory to back them up.

1. What is a variable?
A variable is what you measure in your research; the variables are part of your theoretical framework. E.g., the success of a company's new product (in terms of sales), the stock market price of that company's shares, ...

Different types of variables:
1. The dependent variable
2. The independent variable
3. The moderating variable
4. The mediating variable

1.1 - The dependent variable
The goal of the research is to understand and describe the dependent variable: to understand what makes it vary and to be able to predict it. (Note: it is possible to have more than one dependent variable in a study.)

1.2 - The independent variable
A variable that influences the dependent variable. In other words, the variation in the dependent variable is accounted for by the variation in the independent variable.
The independent variable changes and causes an effect on the dependent variable.

3 conditions for proving a cause-effect relationship
1) The independent and dependent should covary
2) The independent precedes the dependent
3) The research should control for the effects of "extraneous" (other things from outside) variables

Exercises: what is the (in)dependent variable?
- negative relationship: one goes up, the other goes down = opposite direction
+ positive relationship: same direction

1.3 - The moderating variable
= has a contingent effect on the independent-dependent relationship
It modifies the original relationship and adds variance to the dependent.
Example
If manuals (procedures) are available, then workers are able to produce more correct products, and fewer products are being rejected in the quality check. But that is contingent upon the worker actually being inclined to check the manual … if so, then the effect is strengthened.

Exercises: what is the conceptual model?
A manager finds that off-the-job classroom training has a great impact on the productivity of the employees in her department. However, she observes that employees over 60 years of age do not seem to derive as much benefit and do not improve with such training. (exercise 6.5)
-> Do exercise 6.6

1.4 - The mediating variable
= Surfaces between the time the independent variable started and the time the influence on the dependent variable is felt. It helps explain why the independent has an effect on the dependent. In other words, it is a step in between. If that step does not arise, then the effect on the dependent variable will not happen.
Example
A diverse workforce drives more organizational effectiveness. Further research proves that this is because, due to the diverse workforce, creative synergy emerges. It is that creative synergy that results in multifaceted expertise in problem solving and thus higher organizational effectiveness.

Exercises: what is the conceptual model?
Failure to follow accounting principles causes immense confusion, which in turn creates a number of problems for an organization. Those with vast experience in bookkeeping, however, are able to avert the problems by taking timely corrective action. (exercise 6.8)

Exercises: what is the conceptual model?
A store manager observes that the morale of employees in her supermarket is low. She thinks that if their working conditions are improved, pay scales raised and vacation benefits made more attractive, the morale will be boosted. She doubts, however, if an increase in pay would raise the morale of all employees in the same degree. She conjectures that those with additional income will be less boosted by higher pay. (exercise 6.9)
Additionally: she only thinks that the morale of employees will be increased by raising pay scales when the employee is happy.

2. Tips: what is the conceptual model?
You can have more than one independent, dependent, mediating, and moderating variable.
There are hints of mediating variables (an event occurs before the effect on the dependent can occur; if that event does not occur, there is no effect anymore on the dependent) and moderating variables (depending upon a certain aspect, the relationship between dependent and independent will alter).
Label the variables correctly (not "raising pay scales" but "pay scale"), so: do not indicate the action but the subject.
Think through what a "+" and a "-" mean ("up and up" vs "up and down").
It is impossible to know all the variables that should be included in the conceptual model! Base your model thus on a good theoretical framework and a critical literature review.

3. An explanation for expected relationships
Should be provided for all relationships that are theorized to exist among the variables.
In applied research:
You apply existing theories.
So: you theorize the relationship and the direction (positive, negative, …) based on this previous knowledge.
In basic research:
Not everything is known.
Try to have some theoretical foundation and/or clearly argue your case.
⇒ Final step: the development of testable hypotheses.

What is a hypothesis?
= a tentative, yet testable, statement that predicts what you expect to find in your empirical data
For example:
The availability of a product will negatively influence its desirability.
The degree of information at hand will positively influence the accuracy of forecasts.
THESE ARE DIRECTIONAL HYPOTHESES: smaller, larger, positive and negative.
What if you do not know the direction of the effect? A NON-DIRECTIONAL HYPOTHESIS
For example: There is a relation between arousal-seeking tendency and consumer preferences for complex product designs.

In reality, we have two hypotheses for each expected relationship:
Null hypothesis H0, stating there is no effect
Alternative hypothesis H1, stating the effect
The null hypothesis (no effect) is assumed to be true until statistical evidence indicates otherwise. In quantitative analysis we thus test whether or not we can reject the null hypothesis, thus supporting the alternative hypothesis of an effect.
Note: for each relationship, there is a new set of hypotheses (H0 and HA), with A being replaced by the number of the relationship.

What is hypothesis testing?
1. You state the null and the alternative hypothesis.
2. You choose the appropriate statistical test and execute this.
3. You determine the level of significance desired (i.e., this determines whether or not you will reject the null hypothesis; it is the degree of confidence by which you will allow yourself to support the alternative hypothesis).
4. Look at the output of the executed test. Compare the found level of significance with the desired level of significance. If lower, you can reject the null hypothesis!
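As an illustration of these four steps (my own sketch, not part of the lecture notes): comparing two group means with a Welch t statistic. The function name and the data are invented for the example; for reasonably large samples, |t| above roughly 1.96 corresponds to rejecting H0 at the conventional 5% significance level.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """t statistic for H0: 'the two population means are equal' (no effect)."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    standard_error = math.sqrt(var_a / n_a + var_b / n_b)
    return (mean_a - mean_b) / standard_error

# Step 1: H0 'information at hand has no effect on forecast accuracy', H1 'it does'
# Hypothetical accuracy scores for forecasters with much vs. little information:
high_info = [7.1, 6.8, 7.4, 7.0, 6.9, 7.3, 7.2, 6.7]
low_info = [5.2, 5.9, 5.4, 5.6, 5.1, 5.8, 5.3, 5.7]

# Steps 2-4: compute the test statistic and compare against the 5% threshold
t = welch_t(high_info, low_info)
# |t| far above 1.96 -> reject H0, supporting the alternative hypothesis
```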
Lecture 6 - Chapter 14 - Sampling

Steps in sampling
1. Define the population
2. Determine the sampling frame
3. Determine the sampling design
4. Determine the appropriate sample size
5. Execute the sampling process

Sampling is … the process of selecting the right individuals, objects, or events as representatives for the entire population.
⇒ The sample is smaller than the entire population
⇒ Why? Asking everyone is too time-consuming and costly

Population = the entire group of people, events or things of interest that the researcher wishes to investigate.
Element = a single member of the population

What is a sample?
= a subset of the population. By studying the sample, the researcher aims at, and should be able to, draw conclusions that are generalizable to the entire population.
Subject = a single member of the sample

What is a sampling frame?
= the subset of the population that you can actually include in your sample (e.g. e-mail survey: only those clients of which you have an email address can be invited)
!! Always keep in mind the drawbacks of the sampling frame in terms of generalization and limitation of research

What is a representative sample?
= the subset of the population that reflects the population accurately
⇒ Is a miniature of the population (including important differences)
!! Normality in data is presumed (normal distribution)

What is a normal distribution? (most people are centered around the mean)
Mind the "non-response error" = those who did respond are different from those who did not (e.g., refusals of those in the sampling frame …)

Sampling design = But how do you choose who to ask … ?

Sample Design Example
I have a class of 50 students, of two different masters (TEW in blue and MoM in pink). I need to ask 10 students their opinion about the course. How do I select these 10 students? Which sampling technique do I use?
Unrestricted, simple probability sampling:
Simple random (easiest): I simply draw 10 numbers by luck.
Are the two masters well represented? What if I do not have an updated, complete list?

Restricted, complex probability sampling:
Systematic: I draw a start number, and then add 5 (why 5?)
Stratified: I divide the group into two strata, and then select randomly and proportionately for each stratum.
What if one group is very small? > Disproportionate
Homogeneity within vs heterogeneity between groups?
Cluster: I select two clusters of the 10 (= groups for the group assignment) and ask all students in these clusters.
Are the two masters well represented? Homogeneity within vs heterogeneity between groups?
Area: a specific application of cluster sampling where the areas are countries, city blocks, …
Double: initially a sample is used to collect preliminary information, and later a subsample of this primary sample is used to examine the matter in more detail. Note that for each stage in double sampling a technique to select subjects in that stage still needs to be decided.

Nonprobability sampling:
Convenience: I ask the 10 students in the first row since that is most convenient for me.
Cheap and easy, but how representative is this...
Judgemental: I judge myself who I will ask (who will be most positive).
Quota: I begin asking the students closest by, and stop when I have asked 4 TEW's and 6 MoM's.
In essence proportionate, stratified sampling, but … the ten are selected at convenience … representative?
Snowball: I decide who to ask first, and ask that person to give me the name of the next student I will ask.
Cheap and easy, but how representative is this …
Technique possible in case of interviews or not known

Choosing the appropriate technique? Explain WHY!
Generalizability vs costs vs time vs population …
But how many subjects are needed … ?

Sample size
The appropriate sample size = the number of subjects needed in the sample that is large enough to say that they could represent the entire population. !!
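The first three probability designs above can be sketched in code for the 50-student example (a toy illustration of my own; the function names and the fixed seed are not from the chapter):

```python
import random

def simple_random_sample(population, n, seed=0):
    """Unrestricted probability sampling: every element has an equal chance."""
    return random.Random(seed).sample(population, n)

def systematic_sample(population, n, seed=0):
    """Draw a random start, then take every k-th element (k = N / n, here 50/10 = 5)."""
    k = len(population) // n
    start = random.Random(seed).randrange(k)
    return population[start::k][:n]

def stratified_sample(population, stratum_of, n, seed=0):
    """Proportionate stratified sampling: each stratum is sampled according to its share."""
    rng = random.Random(seed)
    strata = {}
    for element in population:
        strata.setdefault(stratum_of(element), []).append(element)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, round(n * len(members) / len(population))))
    return sample

# Toy population: 30 TEW students and 20 MoM students
students = [("TEW", i) for i in range(30)] + [("MoM", i) for i in range(20)]
chosen = stratified_sample(students, lambda s: s[0], 10)  # 6 TEW + 4 MoM
```

Running the stratified version on the 30/20 split reproduces the proportionate 6/4 allocation from the quota example.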
"Could" => size does not say anything about representing the population in terms of characteristics.
Note that no sample statistic (e.g., the average satisfaction with a product) is going to be exactly the same as the population parameter. The characteristics of the population, such as the population mean, the population standard deviation and the population variance, are referred to as its parameters.

The needed sample size is a trade-off between:
1) Precision = how close the sample statistic (e.g., the average satisfaction with a product) is to the true population parameter.
We choose an interval around the found sample statistic in which we expect the true population parameter to lie. The narrower this interval, the greater the precision.
This interval is represented in the standard error = a function of the standard deviation of the sample and the sample size.
More precision if a larger sample size and/or lower variability in the sample.
A smaller sample size is needed if there is lower variability in the sample.
More precision needed requires a larger sample.
2) Confidence = how certain we are that the sample statistic (e.g., the average satisfaction with a product) reflects the true population parameter.
We choose the degree of confidence we desire. A 95% confidence is conventionally accepted. What does that mean? That we are 95% certain that the sample statistic is indeed a good reflection of what the population thinks.
Based on a desired level of confidence, we can calculate the needed sample size (given a certain degree of precision we also choose).
Given a collected sample size and a known population size, we can calculate what the level of confidence and level of precision is.

Sample size > Trade-off between precision and confidence
A sample statistic has a normal distribution. This means that the answers of the subjects of the sample are distributed around the calculated mean in a bell-shaped manner. Thus: most answers around the mean, and few answers at both extremes.
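The precision/confidence trade-off can be made concrete with the standard formula for estimating a mean, n = (z * s / e)^2, where z reflects the desired confidence, s the estimated variability, and e the allowed margin of error. A small sketch of my own (the chapter itself works with Table 14.3 and rules of thumb):

```python
import math

def required_sample_size(z, std_dev, margin):
    """n = (z * s / e)^2, rounded up.
    z       -- z-value for the desired confidence level (1.96 for 95%, 2.58 for 99%)
    std_dev -- estimated standard deviation (variability) in the population
    margin  -- desired precision: half-width of the interval around the statistic
    """
    return math.ceil((z * std_dev / margin) ** 2)

n_95 = required_sample_size(1.96, 10, 2)       # 95% confidence, margin of 2 -> 97
n_99 = required_sample_size(2.58, 10, 2)       # more confidence -> larger sample
n_precise = required_sample_size(1.96, 10, 1)  # more precision -> larger sample
```

The three calls show the trade-offs in the notes: tightening the margin or raising the confidence level both inflate n, while lower variability shrinks it.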
Interval of precision = a: 50% confidence level. More precise, but less confident.
Interval of precision = b: 99% confidence level. Less precise, but more confident.

Sample size > Considerations
The appropriate sample size is thus based on:
1) How much precision do we really need? What is the margin of error that we allow?
2) How much confidence do we really need? How much chance do we take that we are truly reflecting what the population thinks?
3) To what extent is there variability in the population/sample?
4) What is the cost/benefit of increasing the sample size?

Sample size > Rules of thumb and other aids
Roscoe (1975) proposes the following rules of thumb for determining sample size:
1. Larger than 30 and less than 500.
2. If subsamples (e.g., juniors/seniors), a minimum sample size of 30 for each subsample is necessary.
3. In multivariate research (e.g., multiple regression analyses), the sample size should be several times (preferably ten times or more) as large as the number of variables in the study.
4. For simple experimental research with tight experimental controls, successful research is possible with samples as small as 10 to 20.
See Table 14.3 for a 95% confidence level.

Lecture 7 - Chapter 7 - Elements of research design
By now you are able to:
▪ Define the management problem
▪ Define the research problem
▪ Develop a research proposal
▪ Conduct a critical review of the literature
▪ Document your literature review
▪ Develop a theoretical framework & hypotheses
=> Next up: design the research so that the needed data can be collected & analyzed to answer the RQs and find a solution for the problem.

Research design = the blueprint or plan for the collection, measurement and analysis of data, created to answer your empirical research questions.
Choices have to be made between alternatives, taking into account limited time/money/data …
Graphic overview: elements of research design

Research strategies
Will help you achieve your research goal (meet the RO and answer the RQs).
Choice will depend on:
RO & RQs
practical aspects like access to data and available time
own opinion of 'good research'
Possibilities discussed in the book:
A. Experiments
B. Survey research
C. Observation
D. Case studies
E. Grounded theory
F. Action research
G. Mixed methods

A. Experiments
For causal research and/or the hypothetico-deductive approach
For testing causal relationships
The IV is manipulated to see the effect on the DV
E.g. adapt the reward system to see the effect on productivity
Lab experiments vs field experiments (ch11)
Not always feasible in an applied research context > e.g. effect of work stress on personal relationships (> could become ethically problematic ~ Milgram/Stanford prison experiments)

B. Survey research
Survey = system for collecting information from or about people to describe, compare, explain their knowledge, attitudes, behaviour
Popular
Allows collection of both qualitative and quantitative data
For exploratory and descriptive purposes
E.g. consumer satisfaction, job satisfaction, use of social media, …

Ethnography
Has its roots in anthropology (see for instance the work of Margaret Mead, 60s)
A strategy in which the researcher closely observes, records and engages in the daily life of another culture and then writes accounts of this culture, emphasizing descriptive detail.
Closely related to participant observation
E.g. to study the culture of London bankers

PhD research based on ethnography
Understanding a social phenomenon by being part of it for a long period of time.
Developed within cultural anthropology.
Emphasizes the culture of groups, organizations, communities. Researchers then live for weeks or months in the setting they are investigating. They pay attention to symbols and rituals that people use to give meaning to their behavior.
E.g.
inclusion/exclusion of aircraft cleaning companies

D. Case studies
= focus on collecting info about a business unit or organization
In-depth examination of a real-life situation
Case is an individual, group, situation, organization, …
Studying 1 phenomenon using multiple methods of data collection (to 'triangulate')

E. Grounded theory
= systematic set of procedures to develop an inductively derived theory from the data (Glaser & Strauss, '67)
Constant comparing of data against theory.
If there is a bad fit between data (e.g. between interviews) or between data and theory, then categories and theories have to be modified until categories and theory fit the data …

Origins of grounded theory
Grounded in empirical observations. On rare occasions it ends with the discovery of new theory; more often it does however not.
Roots: Glaser & Strauss 1967, and their Awareness of Dying study two years before (to test the relation between interaction with dying patients and their awareness of dying).
Since the original work this developed strongly based on the work of others such as Strauss & Corbin, Charmaz …

F. Action research
= aimed at effecting planned change
Interplay of problem, solution, effects and new solution.

SalkTurbo
The goal of this project is to contribute to a more inclusive labour market in Limburg through the development of a methodology focused on supporting inactive people and employers to adjust the individual, job and organizational environment to reach sustainable employment. A service blueprint will be developed outlining the steps, actors, support and methods that will ensure good guidance in this process.

Extent of researcher interference
To what extent is the study manipulated? Minimal / Moderate / Excessive
Is the study correlational or causal?

Minimal researcher interference
For correlational purposes
e.g. a study on factors affecting training effectiveness
e.g.
a study on the relationship between emotional support and stress experienced by nurses
descriptive, in the natural environment
minimal interference (e.g. just administering some questionnaires, but no disturbance in the routine functioning)

Moderate researcher interference
For causal purposes
e.g. a study on the effect of lighting on performance - the Hawthorne studies
e.g. a study demonstrating that if nurses have more emotional support, stress is reduced significantly
To study cause-effect relations, the researcher deliberately changes variables in the setting and interferes with events as they normally occur.
But: what with extraneous factors? (e.g. whether or not 'stressful' cases come into the hospital during the experimental week)

Excessive researcher interference
For causal purposes 'beyond any doubt'
e.g. a study demonstrating that if nurses have more emotional support, stress is reduced significantly
In a laboratory (completely artificial setting) to rule out extraneous factors:
Three groups of medical students in different rooms and confronted with the same stressful tasks (e.g. to come up with a complicated treatment)
Without support - A little support - Full support
=> Support is manipulated in a laboratory setting

Exercise: level of researcher interference
Problem: Women make up 80% of the equestrian community at amateur level but only 20% at high competitive levels.
RO: To understand the underrepresentation of women at high showjumping levels …
RQs:
How do equestrian professionals feel about the sex integration in their sport domain?
Why, according to equestrian professionals, are women largely absent from the top?
What processes of subtle and/or overt gender discrimination exist in showjumping?
Minimal

Study setting
Where will the study take place (location)?
Non-contrived: the natural environment where work proceeds normally
for exploratory and descriptive (correlational) studies
field studies
field experiment: study conducted in the natural setting to find cause-and-effect relationships, with interference of the researcher (manipulation of variables)
Contrived: artificial setting, manipulated
for cause-and-effect studies
lab experiment

Field experiment vs field study
Field experiment = studies conducted to establish cause-and-effect relationships using the same natural environment in which employees normally function.
Field study = correlational studies done in organizations.
Qualitative study = always in the field, example: master thesis students working on "Accessibility of coworking spaces from the perspective of disabled customers"

Unit of analysis
▪ At what level will your data be analyzed?
▪ Unit of analysis = level of aggregation of the data collected during the subsequent data analysis stage
Individuals: e.g. how to raise motivational levels of employees; OR how many % of staff is interested in doing a social media workshop
Dyads: e.g. supervisor-subordinate interactions; OR what are the perceived benefits of a mentor-mentee relationship
Groups: e.g. comparison between the effectiveness of two departments; OR what are patterns of usage of a new feedback-giving app in different departments of the organization
Divisions: e.g. which divisions (e.g. soap, oil, butter, beverages, …) have made a +12% profit last year?
Organizations: e.g. the effect of organizational culture on hiring minority workers
Cultures: e.g. cultural differences in fragrance consumption
▪ The RQ determines the appropriate unit of analysis:
E.g. what is the effect of group diversity on the difficulty, quality and creativity of decision-making?
E.g. what is the effect of a country's collectivism vs individualism on consumption patterns of cosmetics?
Time horizon: cross-sectional
▪ Snapshot of constructs at a single point in time
▪ Use of a representative sample
▪ One-shot studies
▪ E.g. interviews were gathered over a period of 7 months

Time horizon: longitudinal
▪ Constructs measured at multiple points in time
▪ Use of the same sample = a true panel
▪ E.g. employee motivation under CEO A and under CEO B
▪ Experimental design

Lecture 8 - Chapter 10 - Quantitative data collection (through questionnaires)

A questionnaire is … a preformulated written set of questions to which respondents record their answers, usually within rather closely defined alternatives.
Designed to collect from a large sample
Designed to collect quantitative data (= numbers)
Designed to be able to do statistical analysis

Questionnaire data collection method
A questionnaire can be administered by
1) Mail
2) Electronic
3) Personally
Each method has its own advantages and disadvantages, such as the response rate, the time needed, and the resources needed...

Questionnaire design
Designing the questions for a questionnaire involves thinking about
1) Principles of measurement
2) Principles of wording

Questionnaire design > measurement principles
You should consider the following:
(1) Specify and define the phenomena you are interested in = determine which questions you need to ask in order to be able to answer your research questions and/or test your hypotheses
(2) Determine which instruments you can use to measure this accurately
(3) Think about the analytical use of your questions. E.g., age as a number or age as a category
(4) Specify respondents' tasks. E.g., mandatory, extra input needed, special type of question
(5) Assess the quality (reliability and validity) of your instruments
Reliable = consistent = free of random errors, but there could be systematic, consistent error(s)
Valid = observed value reflects real value = free of random and systematic errors
Example: Let's assume you have quadruplets.
Every child weighs exactly 3 kilos. Now you weigh each child. You have three scales (A, B, and C).
Scale A is not reliable or valid.
Scale B will always reliably measure 500 grams too much...
Scale C is both reliable and valid.

Questionnaire design > Wording principles
You should consider the following:
(1) The appropriateness of the content of the questions
The purpose of each question needs to be considered. E.g., why do you need to know what their monthly income is?
The answer that could be provided should not be ambiguous = Does the question you ask provide you with the needed answer?
e.g., I'm checking your French speaking skills by asking you to translate the words "football," "referee," and "offside"? No, you're testing my knowledge of French football vocabulary.
e.g., "To what extent are you happy?" => Today? In general? In my workplace?
(2) How questions are worded and the level of sophistication of the language used
Do not be blind to your own expertise and commonly used language!
(3) The type and form of the questions asked
Open versus closed questions
Positively versus negatively worded questions
It is advisable to include some negatively worded questions so the tendency to mechanically choose the same option for each question is minimized.
Double-barrelled questions
You should avoid double-barrelled questions, which are questions where two or more separate things are asked. "What if I'm positive about one, but negative about the other?"
Recall-dependent questions
The ability to answer the question should not be dependent upon the recall (memory) capacity of the participant.
Leading questions
The participant should be free to express their own opinion, without being led to what the researchers want them to answer.
Loaded questions
The researcher should avoid using loaded words (e.g., emotionally charged, aggressive wording, over-enthusiastic words …) that would negatively or positively influence the participants' answer.
Social desirability
The participant is not led to what the right answer should be, but a researcher should also be attentive to not ask questions in such a way that participants feel the urge to answer what is socially acceptable.
Closed question types
Different types of closed questions exist. What type is appropriate is dependent upon what you want to know.
(4) The sequencing of the questions
General rule: from general to specific, from easy-to-answer to more difficult. Why? Investment of time and building up confidence.
Personal questions, open-ended questions and a comment section: better at the end.
(5) The personal data sought from the respondents
Is it needed? Is it appropriately asked? Are the answer options appropriate?

Questionnaire set-up
Once you have the questions, you can start by setting up your questionnaire to be administered (= sent out to collect data). This involves
1) Writing an introduction
2) Organizing the questions, including layout, sequence, skip logic, …
3) Writing a conclusion
Note: depending on the data collection method, you may also need to write an invitation to participate in the survey.

Questionnaire testing
So your questionnaire is ready to be administered? NO!
Testing of your questionnaire involves
1) Review of questionnaire design
▪ Does everything work (e.g., skipped questions, response required, links, …)?
▪ Is everything well visible and readable (e.g., on a mobile phone)?
▪ How long does a participant need to fill out the questionnaire?
2) Review of questions
▪ Are questions comprehensible?
▪ Are questions appropriate?
3) Don't forget to also test your invitation, introduction, and conclusion!

Some final remarks: Special issues in cross-cultural research
The questionnaire is administered in different languages.
⇒ Do the questions measure the same variable?
⇒ A possibility to detect this is via back translation.
Some final remarks: Special issues in multimethod data collection
The questionnaire is administered via multiple methods (e.g., personal and email).
⇒ Multimethod may prove to be interesting to diminish possible disadvantages (e.g., if extra explanation is required to understand a question), but can also introduce bias (e.g., by inconsistent explanation).

Some final remarks: Ethics
If anonymity is guaranteed, it is guaranteed!
Confidentiality and respect for privacy are fundamental.
Respect for the participant is given by not asking intrusive questions and not soliciting their participation if unwanted (e.g., forcing participation, spamming, ...).

Lecture 9/10 - Chapter 11 - Experiments
An experimental design is set up to examine cause-and-effect relationships among variables.
The cause = the independent variable = what you change (manipulate) in the experiment
The effect = the dependent variable = what you measure as an outcome of the experiment

Three conditions for cause-effect proof
1) Independent and dependent should covary
2) The independent should precede the dependent
3) The researcher should control for extraneous variables so that one can say that independent variable X, and independent variable X alone, causes the effect in dependent variable Y.

Terminology of experimental design > Control
When we want to be sure that only X causes Y, then we need to control for the extraneous variables.
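One standard way to control for extraneous variables is random assignment of participants to conditions. A minimal sketch of the idea (my own illustration, not code from the book): shuffling the participant list before splitting it gives every extraneous characteristic the same chance of landing in each group.

```python
import random

def randomize_groups(participants, n_groups=2, seed=0):
    """Randomly assign participants to experimental/control groups."""
    shuffled = participants[:]          # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)
    # deal the shuffled list out round-robin into n_groups equal-sized groups
    return [shuffled[i::n_groups] for i in range(n_groups)]

# Hypothetical example: 20 employee IDs split into an experimental and a control group
experimental, control = randomize_groups(list(range(1, 21)))
```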
Variables that could also explain variation in Y Also called nuisance variables Possible techniques to control them 1) Matching = picking the confounding variables and deliberately spreading them across groups 2) Randomization = randomly assigning participants to the groups so that extraneous variables have an equal probability of being distributed among the groups Note (not in book): you may also want to measure extraneous variables and include them in your analysis Terminology of experimental design Manipulation We create different levels of the independent variable and assess the impact (effect) on the dependent variable 30 e.g., Effect on employee productivity when employees are given 2 x 30 min break versus 4 x 15 min break Treatment The manipulation of the independent variable is the treatment. The results of the treatment are called the treatment effect. The treatment thus creates the groups/situations/conditions where we compare what the effect on the dependent variable is depending on which level of the independent variable. !! (most) experiments have a control group (= no treatment) Internal validity = confidence we place in the cause-and-effect relationship “To what extent does the research design permit us to say that the independent variable X causes a change in the dependent variable Y.” The more control over the influence of extraneous variables, the more internal validity. External validity = the extent of generalizability of the results “To what extent will the same effects be found with other people, events, settings, …” Validity in experiments > choosing the type of experiment ------> Validity in experiments > Threats Aside extraneous variables, there are other threats to validity (1) History Effects Internal Validity Threat (2) Maturation effects IVT (3) Testing effects External validity threat (4) Selection bias affects EVT (5) Mortality effects IVT (6) Statistical regression effects IVT (7) Instrumentation effects IVT !! 
In each experimental design a researcher needs to try and tackle these threats. Either before the experiment, during or by acknowledging them in the analysis and discussion of results. (1) History effects = Executing an experiment takes time There is some time between the manipulation (= treating the subjects to the level of the independent variable) and the measurement of the effect on the cause (dependent variable). E.g., exposing your employees to either a 5-day intense training program or a five-week one-day-a-week training program to measure the effect on their problem-solving skills after the training. In the meantime, however, events may occur … E.g., two employees go in their free time to a lecture on creative thinking. Quid: How can you be certain that the effect in problem solving skills of these two employees are due to the training program? (2) Maturation effects = Executing an experiment takes time There is some time between the manipulation (= treating the subjects to the level of the independent variable) and the measurement of the effect on the cause (dependent variable). E.g., exposing your employees to either a 5-day intense training program or a five-week one-day a week training program, in order to measure the effect on their problem solving skills after the training In the meantime, however, things may occur … Maturation effects = effect due to passage of time (not a specific event!) e.g., getting tired, feeling hungry, getting bored So, maybe the intense program is less effective because of the intensity? 
Another example -> 31 (3) Testing effects pre-test = when the researcher wishes to measure the dependent variable before the manipulation after the manipulation, the dependent variable is measured again (post-test) There may be a main testing effect because a participant wants to be consistent; it thus affects the level of the dependent variable post-test = “What did I say before?” there may be an interactive testing effect when the pretest affects the participants’ reaction to the treatment (manipulation) “they asked me what I thought of that brand, so I’m going to be more attentive to the commercial of that brand.” (4) Selection bias effects People will be recruited to participate in the experiment In a lab setting, the type of participant may be very different from the actual population (although they may be a part of the population at this moment or in the future). E.g., students In a field setting, participants may have been motivated by other reasons, such as the incentive, the fact that the boss or head office knows how many or which employees participated,... => Selection bias effects may thus arise due to the selection process itself Note: this is also the case when your sampling frame is limited (e.g., only clients of which you have the email address may not represent all the clients in your population) (5) Mortality effects The group composition changes over time due to people dropping out of the experiment Note: is primarily of concern for experiments spread over time and/or clinical trials (6) Statistical regression effects When the participants are those that have extreme scores on the dependent variable, then they will not truly reflect the cause-and-effect relationship. E.g., a sensory study primarily including highly sensitive participants will not truly reflect the experience of all people (7) Instrumentation effects When the instrument by which you measure is changed between pre- and post-test, or during the experiment. 
E.g., when the pre-test measurement is done by the researcher and the post-test measurement is done by different managers.

Experimental designs > Quasi-experimental
Pre- and post-test experimental group design
- 1 experimental group
- Measurement of the dependent variable: pre- and post-test
- Treatment effect = O2 - O1
- ? But who says that effect comes from the treatment?
Post-test only with experimental and control groups
- 2 groups (experimental and control)
- Measurement of the dependent variable: post-test only
- Treatment effect = O2 - O1
- ? But what was the actual effect, taking into account the starting value?
Time series
- Collection of data on the same variable at regular intervals
- Threats: history, testing, mortality and maturation effects!

Experimental designs > True experimental
Pre- and post-test experimental and control group design
- 2 groups (experimental and control)
- Measurement of the dependent variable: pre- and post-test
- Treatment effect = (O2 - O1) - (O4 - O3)
- ? But maybe there is a testing effect => solution = Solomon four-group design
Solomon four-group design
- 4 groups (two experimental and two control)
- Measurement of the dependent variable: pre- and post-test for one experimental and one control group; post-test only for the other two
- The treatment effect can be calculated in several different ways to investigate the true cause-effect relationship while controlling for the threats

Experimental designs > Specific situations
Double-blinded studies
- Both the experimenter and the subjects are blinded as to whether and/or which treatment was given; most popular in clinical trials.
Ex post facto designs
- There is no deliberate manipulation of the independent variable (too far back in time, not possible, ...), but some participants have been exposed while others have not; their level of the dependent variable is examined.
- E.g., a training program 2 years ago: comparison of the leadership skills of those who attended the program and those who did not.
Simulation
- Aimed at studying real-world phenomena in a more controlled environment.
- Not directed at specific cause-effect relationships, but rather at studying how the system as a whole works.

Experimental designs > Factorial designs (more than one independent variable)
Research question
Suppose that we want to examine the effect of service outcome and service process in an online setting on intentions to shop again at a particular website.
Independent variables
- Service outcome (positive vs. negative)
- Service process (favorable vs. unfavorable)
- This leads to four different treatments (2*2)
Dependent variables
- All respondents were given a questionnaire after the treatment to measure their satisfaction, enjoyment, and behavioral intentions.
Main effect
= the effect of a factor on the outcome while ignoring the effect of the other experimental factors; for every experimental factor there is a main effect.
- Main effect of "service process" = the effect the service process has on the dependent variable, irrespective of the service outcome experienced.
- Main effect of "service outcome" = the effect the service outcome has on the dependent variable, irrespective of the service process experienced.
Interaction effect
An interaction effect is present when the effect of a factor is not constant over all levels of the other factor.
An interaction effect means that the main effect of a factor varies with the different levels of the other factor (it can be attenuated or reinforced).
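The 2*2 example can be made concrete with a small sketch. The cell means below are hypothetical (not from the chapter): each main effect averages over the levels of the other factor, and the interaction compares the simple effects of one factor across the levels of the other.

```python
# Hypothetical mean "intention to shop again" scores (1-7 scale) per cell
# of the 2*2 design: service outcome x service process.
cells = {
    ("positive", "favorable"):   6.0,
    ("positive", "unfavorable"): 5.0,
    ("negative", "favorable"):   4.0,
    ("negative", "unfavorable"): 1.0,
}

# Main effect of service outcome: average over the process levels.
main_outcome = ((cells[("positive", "favorable")] + cells[("positive", "unfavorable")]) / 2
                - (cells[("negative", "favorable")] + cells[("negative", "unfavorable")]) / 2)

# Main effect of service process: average over the outcome levels.
main_process = ((cells[("positive", "favorable")] + cells[("negative", "favorable")]) / 2
                - (cells[("positive", "unfavorable")] + cells[("negative", "unfavorable")]) / 2)

# Interaction: is the effect of process the same at both outcome levels?
effect_process_given_positive = cells[("positive", "favorable")] - cells[("positive", "unfavorable")]
effect_process_given_negative = cells[("negative", "favorable")] - cells[("negative", "unfavorable")]
interaction = effect_process_given_positive - effect_process_given_negative

print(main_outcome)  # 3.0
print(main_process)  # 2.0
print(interaction)   # -2.0 -> not zero, so the process effect depends on the outcome
```

Because the interaction is not zero here, the main effect of service process (2.0) hides that the process matters much more when the outcome is negative (difference of 3.0) than when it is positive (difference of 1.0).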
In contrast, if there is no interaction between the factors, the difference in cell means caused by a factor is constant over all levels of the other factor.
The presence of an interaction effect means that you have to be careful when interpreting the main effects ("it depends on ...").

Other examples
- A 3*3 factorial design (note: the information on this slide is not in the chapter)
- A 2*2*2 factorial design
- A 4*2 factorial design

Lecture 11 - Chapters 8 and 9 - Qualitative data collection (focus on interviews and observations)

(Un)obtrusive methods
= data obtained without the need for the researcher to interact with the people under study.
Examples in qualitative/quantitative research:
▪ Data obtained through internet clickstreams
▪ Scanner data
▪ Soft drink cans found in trash bags
▪ Document analysis

Primary vs. secondary qualitative data collection
1. Primary data: interviews, observations
= information obtained first hand by the researcher on the variables of interest for the specific purpose of the study.
E.g., individuals, focus groups, a panel study.
2. Secondary data
= information gathered from sources already existing.
E.g., company records or archives, government publications, industry analyses offered by the media, websites, the Internet, and so on.
=> often both quantitative and qualitative analysis

Question to you
- If I want to investigate the experience of "crypto babes" through talks/chats on Clubhouse, what data am I using?
- Is a 'literature review' a form of empirical document analysis? What about a 'systematic', 'integrative' or 'scoping' literature review?
Social media offer an untapped source of secondary data.

Interviews - Chapter 8

Face-to-face vs. telephone/computer
- Face-to-face: preferred as it allows picking up nonverbal cues better, but costly when respondents are geographically spread.
- Telephone interviews: respondents may feel more comfortable/anonymous.
- Teams, Zoom, and Google Meet interviews: booming business since the COVID-19 pandemic.

Group interviews
- Focus groups: 8 to 10 members with a moderator leading the discussion; to get impressions, views, ... about an event/product/service/concept.
  - For exploratory studies (e.g., how can the organization support young parents better?)
  - To help interpret the results of a survey (mixed method)
- Expert panels: to elicit expert knowledge and opinions; may be repeated over time.

Unstructured interviews
- The interviewer does not enter the interview setting with a planned sequence of questions to be asked of the respondent.
- Begin with open questions, then ask some more detailed questions.
- For example, in diversity-related research the interviewer asks only 2 questions:
  - Describe a situation in which you felt really included in society
  - Describe one where you felt really excluded
- For more experienced researchers
- When interviewing informally during an ethnographic study

Semi-structured interviews
- Used when it is known from the outset what information is needed.
- Asks the same questions to everyone, but leaves open the possibility of taking a lead from the respondent's answer and asking additional questions.
- Prepared in advance ('interview guide or protocol'):
  - Intro (of interviewer, of interview purpose, confidentiality info, permission to record)
  - Set of topics in logical order: easy warming-up questions and then main questions
- When to stop? When new information is no longer obtained ...
- Can include visual aids (especially important when doing marketing interviews with kids, or to elicit difficult-to-express ideas) or creative probing techniques.

Training your interviewers
- Needed when a large number of interviews must be conducted.
- Need for a briefing:
  - How to start the interview
  - How to proceed with the questions
  - How to motivate respondents to answer
  - What to look for in answers
  - How to close the interview

Tips for interviewing
Example of a 'probing technique'
- To better understand the relationship between disabled owners and their dogs in the workplace, interviewees were asked a number of "complete the following sentence" questions:
  - My dog is my ...
  - Without my dog ...
Example of 'visual aids'
- To better understand the way stand-up comedians see themselves/their body, we ask them to make a drawing of themselves on stage.

Avoiding bias in interviewing
Bias = errors or inaccuracies in the collected data.
Interviewer bias:
▪ lack of trust;
▪ misinterpretation of responses;
▪ distorted responses due to steering of answers through body language.
Interviewee bias:
▪ when they answer in a socially accepted manner;
▪ misunderstand the question;
▪ feel hesitant to ask for clarification of the question.
Situation bias:
▪ non-participation: who declined the invitation to participate, and why?
▪ trust levels and rapport: varying degrees of openness;
▪ physical setting: unease with being interviewed in the workplace.

Creating trust in interviewing
- Listen actively and show interest.
- State clearly the purpose of the interview and assure complete confidentiality.
- Make sure individuals understand you will only share results with the organization in aggregates, without disclosing the identities of individuals.
- Make it clear you are not on the "management side", and explain how respondents too can 'gain' from the research (e.g., improving employee wellbeing, improving diversity management policies, ...).

Techniques during interviewing
▪ Funneling: start with broad questions (e.g.,
can you take me through your career trajectory?), then follow up with more focused ones (e.g., how do you feel about your current remuneration package?).
▪ Unbiased questions: ask questions in a neutral way to avoid steering the answers in certain directions (a biased example: "Pfieuw, that must be boring stuff, huh?").
▪ Paraphrasing and clarifying: to help your interviewee in case they struggle to answer, or in case you don't understand:
  - paraphrase the question into a simpler one ("so are you saying that ...");
  - say "I did not quite understand";
  - leave nothing unclear/unsaid!
  - ask for examples.
▪ Other tactics: silence, repeating the answer, ...
▪ Record & take notes: first ask permission, explain how the recording will be used and later destroyed; in case no permission is obtained, take extensive notes and document after the interview all that is remembered.

Observation - Chapter 9

Defining observation
= planned watching, recording, analysis and interpretation of behavior, actions or events.
Involves going into "the field" (the factory, the supermarket, the waiting room, the office or the trading room), watching what workers (or consumers, or day traders, ...) do, and describing, analyzing and interpreting what one has seen.
Examples:
- Shadowing a Wall Street broker engaged in her/his daily routine
- Observing in-store shopping behavior of consumers through a camera
- Sitting in the corner of an office to observe shared leadership practices
- Studying the customer approach of sales people via an undercover researcher

Characteristics of observation
- Excellent for research requiring descriptive data that is not self-reported
- Data are uncontaminated by self-report
- But time-consuming ...

Control and participation
Controlled vs. uncontrolled = artificial vs. natural setting (e.g., a simulated store environment vs. an actual store)
- Uncontrolled: we get to watch people in their natural environment
- Controlled: helps untangle a complex situation as we can control certain things (e.g.,
the layout of the store)
Participant vs. non-participant = the researcher participates in the daily life of the organization vs. is never directly involved in the actions of the actors
- Passive: sit in the corner of the office and watch
- Moderate: occasional interaction with the group
- Active: engage in almost everything the group does
- Complete: immersion, getting the inside view

Zoom-in on participant observation
- To really understand the nature of phenomena
- To grasp the native's point of view, her/his relation to life, to realize her/his vision of the world
- From pure observation ('bystander role') to 'going native' (= the researcher becomes so involved that eventually all objectivity and research interest is lost)
- Shadowing: closely following a research subject as they engage in their daily activities

Structure and concealment
Structured vs. unstructured = is there a predetermined set of categories of activities or phenomena planned to be studied?
- E.g., task-related behavior, perceived emotions, non/verbal communication, ... => all recorded in fieldnotes/worksheets
- Structured is often quantitative in nature > how long does it take to get food in a restaurant?
- Unstructured: the researcher records practically everything that is observed
  - A hallmark of qualitative research
  - This may lead to a set of tentative hypotheses to be tested later
Concealed vs. unconcealed = do members of the social group under study know they are being investigated?
- Concealed +: subjects are not influenced by their awareness of being studied (= reactivity, see the Hawthorne effect); when unconcealed, the authenticity of the behavior under study might be endangered.
- Concealed -: ethical issues (cf. 'mystery shoppers') due to the absence of informed consent, privacy and confidentiality.

Zoom-in on structured observation
- Looks selectively at predetermined phenomena
- Fragmented into small and manageable pieces of information (e.g., info on behavior, actions, interactions, events)
- From highly structured to semi-structured ...
Mystery shoppers = trained researchers who accurately record employee behavior using checklists and codes to gather info on service performance (e.g., in fast-food restaurants to monitor service quality).

Importance of the coding scheme
- Focus: what is to be observed?
- Objective: the coding scheme should require little inference or interpretation from the researcher
- Easy to use
- Mutually exclusive (no overlap between categories) and collectively exhaustive (covers all possibilities)
- Frequencies (e.g., how often does the manager make a phone call? > simple checklist) and/or circumstances (e.g., timing of phone calls > sequence record on a timescale) > see fig. 9.1
- Quantitative data is the result

Observation in a virtual field
- The field can also be a virtual/online/digital field (e.g., how do consumers in online communities persuade each other?; or a recent study based on women's stand-up comedy recordings in a comedy café: how do women stand-up comedians gender themselves in their performance?)
- But it becomes a thin line between observation and document analysis ...

And from the EGOS conference stream on ethnography (Amsterdam, 2021):
▪ Ethnography on the trading floor in Turkey
▪ Affective ethnography and the use of humor in a hospice giving end-of-life palliative care
▪ Ethnography of office plants
▪ Ethnography from the perspective of oil tankers

Concluding remarks

Lecture 12 - Chapters 15 and 16 - Quantitative data analysis

Quantitative data analysis
1. Getting the data ready for analysis
2. Getting a feel of the data
3. Hypothesis testing

1. Getting the data ready
Once you have collected your data, the first step is to make the data ready for analysis.
Steps to be taken:
1.1) Data entry
1.2) Editing data
1.3) Data transformation
! Getting the data ready is very important. After this step, there can be no doubt about the integrity of the data. Be accurate and consistent!

1.1 Data entry
You will need a statistical program such as SPSS, where you either input the data yourself or the data is automatically inputted via the data collection tool.

So what do I enter in the data view?
We call this "coding of the data".
- First column = identifier for the participant = subject ID
- Then you start with the data: Q1-Q22 are statements, so you input the number chosen by the participant (for Q1: "1").
- For Q23-Q28, you will have to attribute a number to the possible answers.
- There is no right or wrong, only consistency!
What if a certain question has not been answered? = missing value
You have two options:
1) Leave the cell blank
2) Insert a specifically chosen number to represent a missing value
Each option is fine, but be consistent and define how SPSS can know what a missing value is.

The variable view
= information on the variables (one variable per row, information in columns).
What is advisable to input:
1. Name (= link to the question)
2. Type (= measurement level)
3. Label (= name in the output)
4. Values (= what each number means, e.g., 1 = "totally disagree")
5. If needed: indicate your missing value label

1.2 Editing data
= changing the data because the inputted data is "incorrect".
Situation 1: Outliers
- Quickly detectable with the minimum and maximum
- But how do we know it is indeed an outlier ... and not just the honest answer of the participant?
Situation 2: Inconsistent responses
- Needs to be checked at the participant level
- But how do we know it is indeed inconsistent?
Situation 3: Illegal codes (mistake at data entry)
- Can be checked via the minimum and maximum
- Needs to be corrected (this is where your subject ID is handy)
Situation 4: Missing values
You may want to (or need to, for analysis purposes) have this data. What could you do?
1) Ask the participant afterwards
2) Ignore this participant for the analysis of that variable
3) Replace it with a logical response following the answer patterns of the participant
4) Replace it with the mean
But again: is this justifiable?
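Options 2 and 4 above can be sketched in plain Python (the notes use SPSS; the scores below are hypothetical answers to one Likert item):

```python
# Hypothetical answers of five participants to one Likert item;
# None marks a missing value.
answers = [4, 5, None, 3, 5]

# Option 2: ignore the participant for the analysis of this variable
# (pairwise deletion).
observed = [a for a in answers if a is not None]

# Option 4: replace the missing value with the mean of the
# observed answers (mean imputation).
mean = sum(observed) / len(observed)
imputed = [a if a is not None else mean for a in answers]

print(observed)  # [4, 5, 3, 5]
print(mean)      # 4.25
print(imputed)   # [4, 5, 4.25, 3, 5]
```

Note how mean imputation keeps the sample size intact but artificially reduces the dispersion of the variable, which is exactly why the notes ask whether it is justifiable.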
1.3 Data transformation
= changing the original data to another value.
Most common situation: reverse scoring.
Statement 3 is deliberately asked reversed (remember the lecture on questionnaires?). So if we want to calculate an average opinion across all 5 statements, we need to change the number we inputted for statement 3: 1 => 5; 2 => 4; 3 => 3; 4 => 2; 5 => 1.
You can do this easily via the Transform function, but pay attention ...

2. Getting a feel of the data
= analyzing your data, not to answer research questions or test hypotheses, but to understand what your data looks like.
Different possibilities:
2.1) Measures of the central tendency of a single variable
2.2) Measures of dispersion of a single variable
2.3) Visual representation of a single variable
2.4) Measures of the relation between variables
2.5) Visual representation of the relation between variables

Which specific measure or visual representation is possible depends on the scale type.
Scale type = the level on which the variable was measured; a consequence of how the question was asked and how the answer could be given.
Four possibilities:
1) Nominal - gender / nationality / jersey numbers
- Nominal scales do not reflect an amount; they reflect whether a participant belongs to an exclusive category.
- Few analytical possibilities: frequencies and percentages.
2) Ordinal - ranking in sports / preference for 3 activities / top 100 popular songs
- The basic purpose is ranking.
- The ranking is relative (more than or less than); the distances between answers are not equal.
3) Interval - Likert scale
- The basic purpose is again ranking, but the distances between answers are now equal; the "zero" is arbitrary.
4) Ratio - money / time / weight
- Again: the basic purpose is to rank and the distances are equal; the zero is fixed and has a meaning.
Example
Question: what is your age?
- Answer option 1 (ORDINAL): ○ < 20 years; ○ 20-30 years
- Answer option 2 (RATIO): ________ years
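The reverse-scoring rule from section 1.3 can be written generally as (scale_min + scale_max) - score; a sketch with a hypothetical participant on 1-5 Likert items (the notes do this in SPSS via Transform):

```python
# Reverse-score an item measured on a scale from scale_min to scale_max:
# on a 1-5 Likert scale, 1 <-> 5, 2 <-> 4, and 3 stays 3.
def reverse(score, scale_min=1, scale_max=5):
    return scale_min + scale_max - score

# Hypothetical participant: answers to statements 1-5,
# where statement 3 was asked in reversed form.
raw = [4, 5, 2, 4, 3]
recoded = raw[:]
recoded[2] = reverse(raw[2])  # the inputted 2 becomes a 4

average_opinion = sum(recoded) / len(recoded)
print(recoded)          # [4, 5, 4, 4, 3]
print(average_opinion)  # 4.0
```

Without the recode, the average would have understated this participant's opinion, which is the "pay attention" the notes warn about.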
2.1) Measures of central tendency of a single variable
- What does the "middle" value look like? What would most likely be the value of a random participant?
2.2) Measures of dispersion of a single variable
- Where do the other values lie compared to the central tendency? How diffused are the given answers?
2.3) Visual representation of a single variable
- Representing how frequently a certain answer has been chosen (for interval and ratio data) and how the answers are dispersed.
2.4) Measures of the relation between variables
Chi-square test
- 2 variables of ordinal level (e.g., shift (1, 2 or 3) and type of defect found (A, B, C or D)).
- Question: is there a relationship between them?
- So we make a cross tab of all combinations; the chi-square test then assesses how likely this distribution of combinations is. Is there perhaps a statistically significant relation (e.g., defect type D less frequent in shift 2)?
Correlation
- 2 variables of at least interval level (e.g., price of a product and sales).
- Question: is there a relationship between them?
- So we calculate their correlation coefficient (= the degree and direction in which these two variables covary; it lies between -1 and +1).
2.5) Visual representation of the relation between variables

2.6) Last step: assessing the goodness of measures
Goodness of measures > Reliability
For some variables, you may want to create a new variable which averages the answers. Before creating this new variable, you first have to assess whether or not you may indeed sum up these 5 separate variables ... whether this new variable will give you a reliable idea of the participants' views.
= Cronbach's alpha
- a measurement based on all possible correlations between pairs of these five variables
- a number between 0 and 1: the higher the correlations, the more reliable the new variable
Validity
How sure are we that the values we found are the actual values?
There are different ways to assess validity:
- Factorial validity: do measures that should covary indeed covary to a high degree?
- Criterion-related validity: is there indeed a difference in the measure for participants we expect to differ?
- Convergent validity: do two sources that we expect to have similar views indeed produce similar measures?
- Discriminant validity: do two variables that should not correlate indeed show no correlation?

3. Hypothesis testing
Hypothesis = a tentative, yet testable, statement that predicts what you expect to find in your empirical data.
In reality, we have two hypotheses for each expected relationship:
- Null hypothesis H0, stating there is no effect
- Alternative hypothesis H1, stating there is an effect
The null hypothesis (no effect) is assumed to be true until statistical evidence indicates otherwise. In quantitative analysis we thus test whether or not we can reject the null hypothesis, thereby supporting the alternative hypothesis of an effect.
Note: for each relationship there is a new set of hypotheses (H0 and HA, with A being replaced by the number of the relationship).

What is hypothesis testing?
3.1. You state the null and the alternative hypotheses.
3.2. You choose the appropriate statistical test and execute it.
3.3. You determine the level of significance desired (this determines whether or not you will reject the null hypothesis; it is the degree of confidence with which you will allow yourself to support the alternative hypothesis).
3.4. You look at the output of the executed test and compare the found level of significance with the desired level of significance. If it is lower, you can reject the null hypothesis!
So with hypothesis testing we are testing whether we should reject the null hypothesis in favor of the alternative hypothesis, or accept the null hypothesis. We do this with a chosen degree of confidence, which means ... we can be wrong ...

If we reject the null hypothesis in favor of the alternative hypothesis:
Type I error = the probability of rejecting the null hypothesis while it was actually true. In other words: we say there is an effect while in fact there is not.
- This is the significance level (alpha): we reject the null hypothesis if our found p-value is below it.
- Usually alpha = .05, i.e., a 5% chance of a Type I error.

If we accept the null hypothesis:
Type II error = the probability of failing to reject the null hypothesis while it was actually false. In other words: we say there is no effect while in fact there is.
- This is inversely related to the Type I error.
- The probability of committing a Type II error is known as beta; the power of the test equals 1 - beta.
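Steps 3.1-3.4 can be illustrated with a minimal z-test sketch in pure Python (hypothetical data, population standard deviation assumed known for simplicity; a real analysis would typically run a t-test in SPSS or similar):

```python
import math

# 3.1 State the hypotheses.
# H0: the population mean equals 2.0; H1: it differs (two-sided test).
mu0, sigma = 2.0, 0.4  # sigma = hypothetical known population sd
sample = [2.1, 2.5, 1.8, 2.9, 2.4, 2.6, 2.2, 2.7]

# 3.2 Choose and execute the test: a one-sample z-test.
n = len(sample)
xbar = sum(sample) / n
z = (xbar - mu0) / (sigma / math.sqrt(n))

# Two-sided p-value via the standard normal CDF.
def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p_value = 2.0 * (1.0 - phi(abs(z)))

# 3.3 Set the desired significance level (accepted Type I error rate).
alpha = 0.05

# 3.4 Compare the found p-value with alpha.
reject_h0 = p_value < alpha

print(round(z, 3))        # 2.828
print(round(p_value, 4))  # 0.0047
print(reject_h0)          # True: reject H0 in favor of H1
```

Here p = .0047 < alpha = .05, so we reject H0; there remains at most a 5% chance that this rejection is a Type I error.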
